Evidence Based Medicine is fundamentally a good thing, but it was pushed so hard by its proponents that it ended up overemphasizing one particular kind of study as the only real way to know things in medicine.
Yes, absolutely, medicine should be evidence based. Yes, large randomized, double blind, placebo controlled studies provide a lot of information.
However, there are limitations with these kinds of studies.
First, it may not be ethical or practical to study some things in this manner. For example, antibiotics for bacterial pneumonia have never been tested in a randomized, double blind, placebo controlled study.
Famously, there was an article pointing out that parachute use when jumping out of airplanes had never been subjected to a randomized, double blind, placebo controlled study. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC300808/
Later, somebody actually ran that study: https://www.bmj.com/content/363/bmj.k5094 and found that parachutes made no difference, but the result is not applicable to any real world case where you would use a parachute.
Which illustrates the second issue with evidence based medicine. Often, the primary outcome a large trial measures is different from what you really want to know, or the population studied differs in important ways from the patient who is right in front of you. How to apply the results of a large study to the individual patient in front of you is still more of an art than a science.
Finally, I think there is an analogy from machine learning. It has turned out that, instead of writing more and more rules, feeding lots and lots of data to a neural network ends up performing better in a lot of machine learning cases. In a similar way, an experienced physician who has treated thousands of patients over decades has a lot of implicit knowledge and patterns stored in their (human) neural network. Yes, these decisions should be informed by the results of trials, but they should not be discounted, which I think Evidence Based Medicine did, at least to a small degree. During my residency, I worked with physicians who would examine and talk with a patient and tell me that something is not right and to do more extensive tests which would end up unearthing a hidden infection or other problem that we were able to treat before it caused major problems. They were seeing subtle patterns from decades of experience that might not even be fully captured in the patient's chart, much less in a clinical trial, even one with thousands of participants.
So yes, these clinical trials are a very important base for knowledge. But so is physician judgment and experience.
The biggest issue, IMHO, is that clinical trials are often unethical. This is both in theory and especially in practice. I say this as a physician and clinical trial investigator.
EBM deals with this by saying ‘there is no viable alternative’, a remarkable statement of epistemological nihilism that enables much low quality and pointless research.
Can you give an example of unethical trials where “there was no viable alternative” was what got the trial past an IRB? I’m more familiar with inverse complaints that trials are blocked by red tape and hypothetical concerns that are objectively small in actual QALY harm.
(I’m sure this varies by jurisdiction too; I have only heard bad things about US IRBs)
IRBs do not evaluate the value of a research endeavour. They are in fact unable to do this due to lack of knowledge and expertise. They approve trials which fit the mold of trials they have seen before. Why does a clinical trial get done? The main reason is that someone, usually a pharmaceutical or device company, is willing to pay for it.
Some recent examples of problems with clinical trials:
I'd have assumed it was giving sick people the placebo when there's a reasonable hunch (but not a published study that people trust) that the real medication would actually save them
I think of EBM as "where stronger evidence exists it trumps expert opinion" (almost all of the time). In other words, go down the hierarchy[0].
> I worked with physicians who would examine and talk with a patient and tell me that something is not right and to do more extensive tests which would end up unearthing a hidden infection or other problem that we were able to treat before it caused major problems.
Accordingly, I don't view this as discordant with an evidence based medicine practice, as you're not practicing in an area with clinical-trial evidence (and notably, expert opinion is evidence, albeit weak). If you told me you did routine urinalysis and blood cultures on all your admissions, I would view that as discordant and incorrect practice, regardless of whether the expert dinosaur feels it saves lives in their experience.
I also view EBM as opposed to "science-based medicine". Just because something has a (theoretical) scientific basis does not mean it meets the standard to enter my routine practice; I need stronger evidence for that, which notably does not have to be in the form of an RCT as you suggest[0].
> How to apply the results of the large study to the individual patient in front of you is still more of an art than a science.
Frankly, the practice of medicine is more art than science and EBM is a guiding principle that keeps us grounded to measurable outcomes.
This is a central point in the new book Outlive by Dr. Peter Attia. RCTs are great, but realistically we're never going to have long-term RCTs that give clear evidence on prevention of the chronic diseases that will probably kill most of us. A long RCT might measure the effect of a certain intervention over like five years but we need to think in terms of decades. No one is willing to fund those studies, and even if funding wasn't an issue the evidence would come too late for those of us alive today. So, we have to rely on weaker forms of evidence evaluated in a cost (or risk) versus benefit framework.
Yeah and the other thing is that with an advanced enough understanding of statistics you can make a study say almost anything and many physicians won’t be able to tell.
Source: My dad is a physician and I am an economics PhD who knows a lot of statistics and talks with him about these things.
Given how much evidence-free stuff is being used in medicine, it hasn't been pushed anywhere close to hard enough (and the parachute example is just silly)
Another problem I stumbled over is that evidence based medicine makes it increasingly difficult to deviate from established routines and modalities. Long-established methodologies will, by nature of having been around longer, have a larger pile of evidence backing up their efficacy compared to a new method that might perform better but has limited patient study data to back that up. I've seen how this stalls uptake of otherwise evident (non patient trial data based) improvements. It even seems that some manufacturers are well aware of this, and exploit their fortunate position by only incrementally improving methods at very low R&D cost.
> Later, somebody did that study: https://www.bmj.com/content/363/bmj.k5094 and found that parachutes made no difference, but it is not applicable to any real world case where you would use a parachute.
As a physician you are likely aware, but for anyone reading who isn’t: this paper is from the Christmas issue of the BMJ, which publishes “joke” studies. It’s not really meant to be taken seriously in any way.
But there is a serious point to be made, of course. This study involved jumping from stationary airplanes on the ground, which negates the whole point of a parachute (and hence, the control group survived just fine). It therefore "proved" that you don't need a parachute when jumping from an airplane, on the assumption that the results extrapolate to higher altitudes.
Nonsense, of course. But then there's a lot of randomized, controlled trials out there that are just as flawed, only in ways that are non-obvious to non-physicians, or even physicians with different specialities. "Study X proved Y" is never as straightforward as it seems to the lay public.
Yes, like all good satire there is a serious point behind it, but I think the way the GP referenced the two articles doesn’t make it clear that it’s a satire as opposed to a real example of poor/flawed EBM.
The GP didn't address that nuance, and used evidence based medicine to stand in for all forms of science.
A double blind placebo trial is the gold standard for testing causality, but we don't always have to go that far. We can use a hybrid of intuition and correlational tests, and we may not always need a placebo. The point is that the causal test comes with a lot of technical challenges, and we have other, less challenging options.
There is a spectrum of correctness and rigor in science, and we should know when to use something extremely rigorous versus something less rigorous. The barbaric practices you describe only come from a lack of awareness of what statistical rigor and science are.
The main flaw in EBM is that it inverts the scientific method: It’s no longer about hypotheses and falsification. Instead it’s about “scientifically proven facts” (aka “evidence”). Sadly this aligns much better with our dogmatic instinctive understanding of knowledge as a set of agreed truths.
It's because medicine is as much an art as a science, given the intractability of the problem space; and yet whenever we ignore evidence we end up harming or killing people.
It is true that in science nothing can be proven, but that objection doesn't really apply if you think in terms of statistics. Science is broad and doesn't always give answers as a boolean falsification; you can be given a ratio as the result.
That result is usually a probability, established to some degree of confidence within the context of a sample size. This is the closest thing to proof that science can offer, and we just go with it.
EBM goes a bit further than just a probability in the context of a sample size. There's a further degree of rigor, in the qualitative sense that the probability is causal and not merely correlative. It's subtle, but causality is even closer to proof than a correlative study.
There was a great book about Evidence Based Medicine, from a woman out of Oxford, I read a long while ago. Can't for the life of me remember the name, but Oxford does have a center for "Evidence Based Medicine," and the whole idea has been slowly coming back into fashion.
Instead, all I can recommend is a collection of essays -- loosely -- related to the subject: Where's the Evidence?, Silverman
---
On another note, someone who's gone down the pharmaceutical research rabbit hole and concluded a lot of it is "bunkum" (e.g. statins, anti-depressants, and so on), would do well to look into surgery next -- especially that of surgical implants.
Some worth looking into: joint replacements, joint and spinal fusions, angioplasties.
Of note: spinal fusions were not brought about on any scientific or experimental basis, but on hunches from surgeons (damnable bunch). Many do result in a reduction in pain, but it does bring into question how much of it is simply "placebo." There were a few experiments in this realm, double blind (or single blind, it escapes me right now) studies on back surgeries, that showed evidence the placebo effect was involved. The simple act of being put under anaesthesia and convinced one had gotten "treatment" resulted in reductions in pain for those who came in for spinal surgery.
I do wish more research would be done into the placebo effect, and doctors would stop faffing about in self-righteousness (though having an air of expertise and authority does impact the placebo effect positively). And I don't mean in a "the placebo effect exists and is usable as a treatment" way, but in a "the placebo effect exists, and here is how it works in the body." That would be very interesting. Long ago I read a few papers on the placebo effect in regulating blood pressure via regulation of a certain chemical (it could have been a hormone like renin, but I can't recall). At the time, I thought the autonomic nervous system would be a good avenue to research to better figure this out (it would be a convenient explanation for how the placebo effect works: activating the nerves in the kidneys via the cross-talk from the central nervous system to excrete certain hormones). If it can affect the kidneys, what other bodily systems can it impact? The possibilities are not endless, but rather exciting.
I believe the "placebo effect" is the reason chiropractors are so popular, in spite of there being no real physical evidence as to why their treatments help people.
> Of note: spinal fusions were not brought about on any scientific or experimental bases, but on hunches from surgeons (damnable bunch). Many do result in a reduction in pain, but it does bring to question how much of it is simply "placebo."
You might be interested in Surgery, the Ultimate Placebo by orthopaedic surgeon Ian Harris.
I do agree things should be studied but you have to be careful with it. Studies are big and expensive and things can be missed. Remember the big flap over hormone replacement therapy being shown harmful? Oops--all it really showed is what we knew all along, estrogen is risky for fat women.
No, I specifically was referring to fat women. Check the recommendations on birth control pills--same thing, older + fat makes them risky. The problem with the big study is that their sample was disproportionately overweight.
Unopposed estrogen (i.e. the old estrogen only HRT) is bad for women of all sizes, endometrial cancer sucks.
Separately, obesity causes higher system estrogen levels and carries the same risks.
What you may be referring to is the more recent WHI study which does have methodological flaws, but unopposed estrogen is a no-no for patients with a uterus.
How can you imply a woman past menopause might possibly want sex?? And who would be willing to satisfy that desire, anyway??
Actually, it's probably a bad idea to combine them because the body's response to hormones is so variable. Keep them as separate pills so you can tweak the balance easier.
Estrogen/progesterone and testosterone are not exclusively female/male hormones. Testosterone may boost sex drive and increase muscle mass, as well as provide some psychological benefit in women. Estrogen is the most important regulator of bone health in both men and women. There are people with estrogen insensitivity syndrome, both men and women, and from all reports they are having an extremely uncool time:
It would be part of HRT for menopause for the reasons I gave: mental health, sexual desire, muscle maintenance.
The discussion about estrogen in men is just for context. It's not unusual to talk about levels of any hormone in men or women. There's nothing "shocking" about testosterone in women, or estrogen in men.
I was fixating on semantics but it's not the point.
Not my area but for what it's worth UpToDate (KA Martin, RL Barbieri, JL Shifren @ MassGen Brigham) address it in expert opinion form:
> We do not suggest the routine use of androgen [testosterone] therapy for postmenopausal women. Levels of endogenous androgens do not predict sexual function for women; however, androgen therapy that increases serum concentrations to the upper limit or above the limit of normal for postmenopausal women has been shown to improve female sexual function in selected populations.
The linked out sexual dysfunction article (JL Shifren):
> In our practice, we rarely use testosterone, but will prescribe it when greatly desired by a peri- or postmenopausal patient with low libido associated with distress who has no contraindications to testosterone therapy or identifiable etiology for sexual dysfunction and is otherwise physically and psychologically healthy. Typically, the patient has already tried other safer interventions prior to the testosterone prescription, including low-dose vaginal estrogen, relationship interventions (eg, sex therapy, date nights, use of sexual aids such as vibrators, books), and adjustment of antidepressant medication (when indicated) [12]. At least one visit with a sex therapist is strongly advised prior to pharmacologic treatment, as this safe and effective intervention may make pharmacologic therapy unnecessary or enhance the response to treatment. Testosterone levels should not be used in determining the etiology of a sexual problem or in assessing efficacy of treatment, as no clear association between androgen levels and sexual function has been found in several large, well-designed studies.
What makes you conclude that statins are "bunkum"? I don't know anything about anti-depressants or surgical implants, but I have gone down the rabbit hole on statins, and at least as far as I can tell they do what they are generally claimed to do and are safe, so if you are at high risk for heart disease you should probably be taking them.
- Absolute risk reduction in CVD-caused mortality is mild at best (<1-2% absolute risk reduction)
- Percentage of people that get side-effects is higher than the percentage of people that receive a benefit from statins; as well, the side-effects are rather serious in affecting QoL -- making them a poor choice as a long-term prophylactic
- Generic lipid panels that the vast majority of people take are generally worthless for estimating lipid health: LDL "large particle -- bad" cholesterol is not actually measured, but estimated. The standard Friedewald formula is LDL = total cholesterol - HDL - triglycerides/5, which is liable to under-report true LDL levels (particularly when triglycerides are high). Estimation also has the drawback of being unable to tell how much of each "bad" particle is actually in your blood (chylomicrons vs VLDL vs LDL etc.) -- muddying your actual risk profile. Likewise, no LDL "content" tests are performed to measure how much cholesterol each LDL particle is actually carrying. So yes, statins will lower these numbers, but the methodology around these numbers is flawed, and only loosely correlated with cardiovascular health
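For anyone who wants to see that estimation concretely, here is a minimal sketch of the Friedewald calculation (my own illustration, with hypothetical mg/dL values, not data from any actual panel):

    def friedewald_ldl(total_chol, hdl, triglycerides):
        # Friedewald estimate: LDL = TC - HDL - TG/5, where TG/5 stands in for VLDL.
        # The estimate is considered unreliable once triglycerides exceed ~400 mg/dL.
        return total_chol - hdl - triglycerides / 5.0

    # Hypothetical panels: same total cholesterol and HDL, different triglycerides.
    print(friedewald_ldl(200, 50, 100))  # 130.0
    print(friedewald_ldl(200, 50, 300))  # 90.0 -- higher TG drags the LDL *estimate* down

The point being that the reported "LDL" number moves around with triglycerides even when nothing about the LDL particles themselves has changed.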
What patient population are you talking about? Statins have excellent evidence behind them.
> - Absolute risk reduction in CVD-caused mortality is mild at best (<1-2% absolute risk reduction)
It depends what your baseline risk is and what time point you're looking at.
> - Percentage of people that get side-effects is higher than the percentage of people that receive a benefit from statins; as well, the side-effects are rather serious in affecting QoL -- making them a poor choice as a long-term prophylactic
This is just completely false, even from 2013 data[0] (which overestimates diabetes) but is better addressed in a subsequent review [1].
The side effects, if they do happen, are also self-limiting and stop with cessation or changing agents.
> - Generic lipid panels that the vast majority of people take are generally worthless for estimating lipid health ... and only loosely correlated with cardiovascular health
Your article has this caveat: "Virtually all of the major statin studies were paid for and conducted by their respective pharmaceutical company. A long history of misrepresentation of data and occasionally fraudulent reporting of data suggests that these results are often much more optimistic than subsequent data produced by researchers and parties that do not have a financial stake in the results."
Ignoring that, 2% (absolute) of the population on statins develop diabetes and 10% (absolute) develop muscle pain/rhabdo (in one experiment). This is also ignoring all of the serious adverse events from gen-1 and gen-2 statins.
As a whole, they on average achieve a 1.2% reduction in absolute mortality (all-cause or only from heart disease?).
While I have just read it, I'm going to discount [1], because of what's inside "Declaration of interests."
And on principle, I'm not going to read [2], unless you quote the relevant sections (as is good form when referencing).
As stated, theNNT data is from 2013; many statin trials have come out since.
The 2% diabetes figure is overstated, as shown in link 1, but you discount that because of its declared disclosures, despite the fact that it’s one of the most highly cited papers on the subject in the last 10 years and the study was a review.
You also discount a well respected guideline on lipids out of principle.
Then you cite unrelated data from 2006 as a good reference for an unknown reason?
Finally, you disregard the opinions of a Cochrane review in an unrelated patient population which directly contradicts your misinterpretation of the data (you clearly have no concept of NNT/NNH, as you just make capricious interpretations of ARR), yet cite yourself as more of an expert than the Cochrane authors.
For what it’s worth since you focus on conflicts it’s to my financial benefit if you don’t take your statins (and pharma pays me nothing), so by all means skip the statin at your own risk.
This really doesn’t seem like an open discussion so I’ll stop engaging. But you’re spreading misinformation for any reader, statins save lives.
I cannot make it any clearer: statins reduce your absolute risk of mortality by at most 2%; not smoking reduces your absolute risk by 7%. Statins provide meager benefit for the associated risks. It does not take an expert to do the math.
Long-term damage to the liver. Use it long enough and your liver will die. But it's too hard to do a causal analysis on this, as the timelines are measured in decades.
“However, the rate of true statin-induced hepatotoxicity is exceedingly low. Moreover, multiple retrospective studies have shown that statins are not only safe for use in patients with cirrhosis, but are also likely beneficial in reducing liver decompensation, hepatocellular carcinoma, infections, and death.”
There are multiple different statins. The latest research indicates that some are more effective than you indicate, with a lower rate of serious side effects.
Which specific studies are you referring to? I think you might be looking at outdated research, or studies that didn't run long enough.
Seriously, if you're interested in this area then listen to the entire podcast that I linked above. It's packed with information from one of the leading researchers in the field and might change your mind about a few things.
Do you have a single citation to back this claim up? Unless you’re talking about prophylactic statins in patients with no known risk factors (I.e. primary prevention which is not standard of care) this is complete misinformation.
Taylor, Fiona, et al. “Statins for the Primary Prevention of Cardiovascular Disease.” Cochrane Database of Systematic Reviews, vol. 2021, no. 9, 2013, https://doi.org/10.1002/14651858.cd004816.pub5.
The authors' own conclusion seems to directly contradict your overall argument here:
"Implications for practice
The totality of evidence now supports the benefits of statins for
primary prevention. The individual patient data meta-analyses
now provide strong evidence to support their use in people at low
risk of cardiovascular disease. Further cost-effectiveness analyses
are now needed to guide widening their use to these low risk
groups."
And as haldujai mentioned this is explicitly regarding their use for primary prevention, not with regards to usage for secondary prevention which has strong supporting evidence.
Generally, you read the papers for their methodology and their data — not for the author commentary; and then make up your own mind. One man’s “benefits of statins for primary prevention” is another’s “the benefits are too meager to be notable.”
Please provide me literature from a reputable publication (viz. the AHA, Cochrane, or the New England Journal of Medicine), that has not been funded by a pharmaceutical company — that demonstrates strong supporting evidence for the usage of statins in secondary prevention; wherein the experiment does not extrapolate from LDL values to determine mortality risk (I will concede defeat if you can find any paper that utilizes CAC scans and shows a reversal in atherosclerosis), and/or shows a greater than 2% absolute reduction all-cause or CVD-only mortality risk.
You will not find such a paper, because it does not exist. Most funding has gone towards primary prevention in young adults, while little more than weak associative studies have been published for secondary prevention (and countless others no doubt have never seen the light of day).
So every link I provided gives you a risk of MACE. Reversal of atherosclerosis is not the outcome measure we care about lol. Certainly not lowering coronary calcium, which is not possible. You’re literally making this up…
Statins work amazingly not just for LDL reduction but plaque stabilization.
As an aside, a 2% ARR is huge: it means the number needed to treat is 50 to save a life. And that is for something with next to no serious side effects; the rhabdo/diabetes risk is dramatically overstated.
Pertinently, the number needed to treat for MACE is 39. That’s hugely significant.
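For anyone following the arithmetic, NNT is just the reciprocal of the absolute risk reduction; a quick sketch using the rough figures quoted in this thread (not numbers from any particular trial):

    def nnt(arr):
        # Number needed to treat = 1 / absolute risk reduction (ARR expressed as a fraction).
        return 1.0 / arr

    print(round(nnt(0.02)))     # 50 -- a 2% ARR in mortality: treat ~50 people to save 1 life
    print(round(nnt(0.012)))    # 83 -- the 1.2% all-cause figure mentioned earlier in the thread
    print(round(nnt(0.0256)))   # 39 -- an ARR of ~2.6% for MACE corresponds to the NNT of 39 above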
Then we are at an uncrossable philosophical chasm.
I don’t consider 2% ARR huge — especially when the risks of side-effects have been down-played. We can argue about this all we want, but it’s no longer a matter of fact, but of opinion and values.
You seem to be misunderstanding how evidence works; it is not about “what you consider” and is entirely based on fact. We also don’t talk about ARR in isolation when we decide on interventions; it’s NNT vs NNH, considering the specific risk being reduced and the specific harm.
I’ll use your 2% ARR for death although there are better numbers in different patient populations.
In other words: statins will save 1 life for every 50 patients treated and prevent 1 non-fatal cardiac event for every ~20-40 patients treated, a medically significant result, period. The NNH is > 100, and the harm is a self-limiting myopathy (and a possible risk of accelerated diabetes onset seen in observational studies, which is still outweighed by the reduction in all-cause mortality and MACE).
The evidence is unequivocal that the benefits far outweigh the harms.
Separately, you have a personal choice to take/not take any treatment, and you may personally feel treating 50 people to save 1 life is not worth it for you, because you subjectively feel the numbers don’t fit your personal risk/benefit model. This is where you are saying 2% ARR is insignificant to you but this says nothing about the evidence or rationale behind the treatment.
> risks of side-effects have been down-played.
Except every study looking at side-effects has shown they were overstated in the initial trial.
“The most severe complication of SI is discontinuation of effective cholesterol-lowering treatment in patients who, by virtue of their CVD risk and cholesterol level, might otherwise benefit.”
The linked study arguing that statin side-effects were overstated has a pharmaceutical consultant as its first author (you do not see this as a problem; I do).
A 2% ARR is meager in relation to lifestyle changes, which can deliver 3x-15x the ARR of statins (you do not see this as a problem; I do).
Those are the facts. How you interpret them is subjective.
All of the money and manpower thrown into statins, could have been thrown into smoking cessation programs or preventing onset of type 2 diabetes. This is an opinion.
Saving 1 life for every 50 patients, at the cost of untold resources, instead of saving 6-30 for every 50 is myopic. This is an opinion.
Medical significance is an opinion; the determination of significance is a subjective interpretation. This is not a case of "this idea concurs with my sentiments, so I will say it is so, period" -- this is math. Statistical interpretation is an opinion; the difference in statin efficacy vs. lifestyle changes is a quantifiable fact, and the difference is 3-15x. To take a statistical finding without incorporating it into the larger context is poor practice, bordering on deception (the former is a fact, the latter is an opinion).
Here are 50 people aged 50 y.o. from the general population:
> Generally, you read the papers for their methodology and their data — not for the author commentary; and then make up your own mind.
I'd say that generally it's advisable to take both into consideration given that in most cases the author of a paper likely has more domain expertise than you in that specific area. Not always, obviously, and not to the exclusion of an outside objective analysis of their data and results, but it's certainly more informative than referring someone to a page from a study with no additional context.
> that has not been funded by a pharmaceutical company
I get where you're coming from here, but it's kind of silly. And the question becomes where do you draw the line? Is a meta-analysis of a large group of studies each of which has been supported at least in part by funding from a pharmaceutical company guilty by association? That aside, the structure of research funding with regards to pharmaceuticals (at least in the US so far as I'm aware) makes the likelihood of conducting any long term, large scale study without receiving any funding from a pharmaceutical company vanishingly small. There have certainly been issues with studies funded and conducted by those companies, but that doesn't mean that all studies funded by them are instantly invalid. Nor does it mean that it's impossible to conduct a study that has received their funding without compromising its integrity. It is entirely possible to take sufficient measures to isolate those companies from the actual process and analysis of the research.
> wherein the experiment does not extrapolate from LDL values to determine mortality risk
As I made reference to earlier, I am not a subject domain expert here. That being said, I did do a brief survey of the literature, reading fifteen papers published on either studies of statin efficacy or meta-analyses thereof. I may be misunderstanding what you're saying, but the studies I looked at assessed efficacy by looking at the actual number of cardiac events, strokes, and so on suffered by those in the control and experimental groups. Their analysis was based on those numbers, not an extrapolation from LDL values.
> and/or shows a greater than 2% absolute reduction all-cause or CVD-only mortality risk.
Why that number? And why that number in two very different contexts? Regardless, statins have been shown in numerous studies to be highly effective.
> while little more than weak associative studies have been published for secondary prevention
That's simply not true. There have been a number of large scale, long term studies on the efficacy of statins for secondary prevention and the preponderance of evidence is on the side of them being very effective.
Some of the studies I looked at:
Mega, J. L., Stitziel, N. O., Smith, J. G., Chasman, D. I., Caulfield, M. J., Devlin, J. J., … Sabatine, M. S. (2015). Genetic risk, coronary heart disease events, and the clinical benefit of statin therapy: an analysis of primary and secondary prevention trials. The Lancet, 385(9984), 2264–2271. doi:10.1016/s0140-6736(14)6173
- The primary focus of this study was looking at the efficacy of statins with relations to genetic risk profiles, but as a component of that we can see the overall efficacy of the statins across those risk profiles as well.
- "The relative risk reductions were
34% in low, 32% in intermediate, and 50% in high genetic
risk score categories in the primary prevention trials, and
3% in low, 28% in intermediate, and 47% in high genetic
risk score categories in the secondary prevention trials.
When the data were combined, the gradient of relative risk
reductions with statin therapy across low, intermediate,
and high genetic risk score categories were 13%, 29%, and
48%, respectively (p value for trend=0·0277, figure 2)."
- "With a focus on the primary prevention trials,
in JUPITER, the number needed to treat to prevent
one coronary event in 10 years was 66 for those individuals
with a low genetic risk score, 42 for those with an
intermediate score, and 25 for those with a high score. In
ASCOT, the number needed to treat to prevent one coronary
heart disease event in 10 years was 57, 47, and 20,
respectively, across the three genetic risk score categories."
MRC/BHF Heart Protection Study of cholesterol lowering with simvastatin in 20 536 high-risk individuals: a randomised placebo-controlled trial. (2002). The Lancet, 360(9326), 7–22. doi:10.1016/s0140-6736(02)09327-3
- "All-cause mortality was significantly reduced (1328 [12·9%] deaths among 10 269 allocated simvastatin versus 1507 [14·7%] among 10 267 allocated placebo; p=0·0003), due to a highly significant 18% (SE 5) proportional reduction in the coronary death rate (587 [5·7%] vs 707 [6·9%]; p=0·0005), a marginally significant reduction in other vascular deaths (194 [1·9%] vs 230 [2·2%]; p=0·07), and a non-significant reduction in non-vascular deaths (547 [5·3%] vs 570 [5·6%]; p=0·4). There were highly significant reductions of about one-quarter in the first event rate for non-fatal myocardial infarction or coronary death (898 [8·7%] vs 1212 [11·8%]; p<0·0001), for non-fatal or fatal stroke (444 [4·3%] vs 585 [5·7%]; p<0·0001), and for coronary or non-coronary revascularisation (939 [9·1%] vs 1205 [11·7%]; p<0·0001). For the first occurrence of any of these major vascular events, there was a definite 24% (SE 3; 95% CI 19–28) reduction in the event rate (2033 [19·8%] vs 2585 [25·2%] affected individuals; p<0·0001)."
Sever, P. S., Dahlöf, B., Poulter, N. R., Wedel, H., Beevers, G., Caulfield, M., … Östergren, J. (2003). Prevention of coronary and stroke events with atorvastatin in hypertensive patients who have average or lower-than-average cholesterol concentrations, in the Anglo-Scandinavian Cardiac Outcomes Trial—Lipid Lowering Arm (ASCOT-LLA): a multicentre randomised controlled trial. The Lancet, 361(9364), 1149–1158. doi:10.1016/s0140-6736(03)1294
- "The primary endpoint of non-fatal myocardial infarction,
including silent myocardial infarction, and fatal CHD was
significantly lower by 36% (hazard ratio 0·64 [95% CI
0·50–0·83], p=0·0005) in the atorvastatin group than in the
placebo group (figure 2, table 3)."
- "There were also significant reductions in four of the seven
secondary endpoints, some of which incorporated the
primary endpoint: total cardiovascular events including
revascularisation procedures (21%); total coronary events
(29%); the primary endpoint excluding silent myocardial
infarction (38%); and fatal and non-fatal stroke (27%,
figures 3 and 4). All-cause mortality was non-significantly
reduced by 13%, with non-significantly fewer
cardiovascular deaths (figures 3 and 4) and no excess of
deaths from cancer (81 assigned statin vs 87 assigned
placebo) or from other non-cardiovascular causes (111 vs
130)."
It appears that the evidence in support of the use of statins is quite overwhelming.
> I'd say that generally it's advisable to take both into consideration given that in most cases the author of a paper likely has more domain expertise than you in that specific area. Not always, obviously, and not to the exclusion of an outside objective analysis of their data and results, but it's certainly more informative than referring someone to a page from a study with no additional context.
I do not agree. I do not have the time to elaborate further.
————
> I get where you're coming from here, but it's kind of silly. And the question becomes where do you draw the line? Is a meta-analysis of a large group of studies each of which has been supported at least in part by funding from a pharmaceutical company guilty by association? That aside, the structure of research funding with regards to pharmaceuticals (at least in the US so far as I'm aware) makes the likelihood of conducting any long term, large scale study without receiving any funding from a pharmaceutical company vanishingly small. There have certainly been issues with studies funded and conducted by those companies, but that doesn't mean that all studies funded by them are instantly invalid. Nor does it mean that it's impossible to conduct a study that has received their funding without compromising its integrity. It is entirely possible to take sufficient measures to isolate those companies from the actual process and analysis of the research.
Again, I do not agree. These are matters of values, and no arguments can be made for what we innately value. I draw a nuanced line based on my values, that I have tried to express here; but making it finer and finer will serve no purpose but as fuel for disagreement — because it is wholly subjective.
Possibility is not actuality. Most researchers are not a Platonic ideal: perfectly noble and virtuous and vigilant. They are real people: lazy, prone to error, requiring money to survive, self-interest at the very forefront.
I will not call your viewpoint naive, but it’s something that can only be formed when one’s exposure to this field is limited to papers and doctor’s visits.
> Why that number? And why that number in two very different contexts? Regardless, statins have been show in numerous studies to be highly effective.
Because 2% ARR is the highest change I’ve seen in any statin experiment — in either context. I do not consider one out of every fifty people being saved by a statin significant, or my definition of “highly effective.”
> It appears that the evidence in support of the use of statins is quite overwhelming.
My patience for reiterating this point is gone: relative changes are not absolute changes.
A starting risk profile of 2.25%, reduced to 1.25%, will have been reduced an absolute 1%, but a relative 44%.
This is why you read the methodology, and not the authors’ interpretation of their own data.
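To spell the distinction out, here is the same arithmetic as a tiny sketch (the 2.25% and 1.25% risks are the hypothetical figures above):

    baseline_risk = 0.0225   # 2.25% event risk without treatment (hypothetical figure from above)
    treated_risk = 0.0125    # 1.25% event risk with treatment (hypothetical figure from above)

    arr = baseline_risk - treated_risk   # absolute risk reduction
    rrr = arr / baseline_risk            # relative risk reduction
    nnt = 1 / arr                        # number needed to treat

    print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
    # ARR = 1.0%, RRR = 44%, NNT = 100

A headline quoting the 44% sounds far more impressive than one quoting the 1%, which is exactly why it pays to read the methods rather than the abstract.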
Ah, the human as a specimen in a petri dish approach to so-called health care. Flies directly in the face of study after study after study that says: diet and lifestyle are major factors in all deadly diseases.
Probably driven by modern tech, completely inadvertently, because x-ray machines and MRIs and such demand that the patient go to the clinic or hospital rather than the doctor going to the patient's home.
Star Trek's Dr. McCoy and his tricorder was a dream of tech that you could carry in the proverbial little black bag. We aren't there and have forgotten a lot of important principles in the process of pursuing shiny tech.
But it's generally a bad idea to critique any of that. Gets one nothing but hatred.
Ashkenazi women often have genetic mutations that cause them to get breast cancer and the cancer prognosis is much worse than average. This cannot be addressed by diet or lifestyle. However, genomic sequencing detected common patterns that can be tested for, and with that information, women can make informed choices about treatment or other actions, ideally well before they ever have a malignant tumor.
I won't disagree that we could do a lot better overall with large-scale changes in diet and lifestyle. But it still would leave a lot of people dying of heart disease and cancer as well as infectious diseases.
Those x-ray machines and MRI machines make a huge difference for people with internal injuries as well. No amount of diet and lifestyle will heal a shattered bone.
All the comments I make on Hacker News are done in good faith and assume good will on the part of the person I'm replying to. If you really think I misread, please rewrite what you wrote; as far as I can tell, I read and interpreted it clearly. It's best if you come out and say things directly. Your statement's thesis isn't entirely clear, but I read it as:
Flies directly in the face of study after study after study that says: diet and lifestyle are major factors in all deadly diseases. <- yes, this is true. However, it's not a useful statement when criticizing medicine based on technology. Even if everybody in the world had a "perfect" diet and lifestyle, it wouldn't address the majority of diseases at a very large scale.
Probably driven by modern tech, completely inadvertently, because x-ray machines and MRIs and such demand that the patient go to the clinic or hospital rather than the doctor going to the patient's home. <- OK, not sure what to say about this other than, it's an unfair comparison; there's plenty of in-home care and in-home doctors can't resolve a wide range of issues at a person's house. It also brings risk to the doctor, as well as causing them to spend their day travelling around.
Star Trek's Dr. McCoy and his tricorder was a dream of tech that you could carry in the proverbial little black bag. We aren't there and have forgotten a lot of important principles in the process of pursuing shiny tech. <- actually, doctors and medical researchers, for all that they pursue shiny tech, still mostly have a good appreciation for "important principles". For example, I watched my surgeon count the sponges that they took out of my spouse after surgery, to make sure they hadn't left any in (this was a surprisingly common problem with surgeries). They don't try to make a "sponge detector machine". That's just one example out of millions; if you've followed doctors on Rounds, you'll see that most of what they do isn't technology.
But it's generally a bad idea to critique any of that. Gets one nothing but hatred. <- I explicitly wrote my comment to be friendly, fact-based, and make my thesis as clear as possible. If you're going around pointing out that "medical basics that we've known for a while matter", nobody is going to disagree with you. But if you attack doctors/medical researchers the way you do, you instantly cause people to interpret you as somebody to argue with.
All interventions that don’t require the patient to be in a coma first require patient compliance.
Diet and lifestyle are nearly impossible to address from the perspective of the Dr, as they require the patient to want to and be willing to change foundational elements of their life. Unless the patient comes to you requesting that, good luck.
Many will get actively angry if you mention even obvious and severe issues like obesity, to the point of slandering Dr’s or avoiding going in for serious issues later.
Taking a pill, patients will often be able to do that (but definitely not always!).
Getting a surgery, similar.
Is it any wonder that Dr’s use the tools that people will let them use, instead of the ones people won’t?
My dad was sent to a nutritionist to deal with his spare tire. She gave him a meal plan, and told him to come back in a couple months for a followup. When he did, she was shocked that he'd actually followed the meal plan, and his spare tire deflated the expected amount. He kept it deflated for the rest of his life, though he said it was a constant struggle.
Glad it worked for your dad, and I can definitely see it working for others. Part of the difficulty with being told that I need to make lifestyle changes is how open-ended it can be. That can make it hard to plan, know what to do, or stick with it. I can definitely see that being given a plan, and having to think little about it, makes the on-ramp to lifestyle changes so much easier. This helped me. My doctor recommended a diet and gave me sites with meal plans, so while I was trying to navigate my way through it, I had resources to get me going the first few weeks.
Awesome stories! A key element in both of them (IMO) is that the end result was recognized to be in the hands of the person doing it (not the Dr.), and they put the work in to accomplish it because they believed it was important for them. And that work was tractable.
For many people, the hard part is getting to the point they (mentally) are able to do that, which is the psychiatric and environmental elements IMO.
If someone has been convinced that they can't actually solve the problem, or that it is dangerous to solve the problem, they will avoid trying, or sabotage themselves when it looks like they could potentially succeed.
If someone is in an environment where it is impossible to get proper nutrition, or get time/space to try things or do the planning required, or where they aren't allowed to make the decisions necessary for a better outcome, then they'll also be frustrated and unable to make the change.
Luckily, both are addressable except in very rare edge cases, but it is rarely comfortable or easy to do.
I have no idea why both of these comments assume that I am talking about treating people with only diet and lifestyle. Other than possibly prejudice, or people knowing more about me than they are willing to admit.
I'm talking about doctors having a tradition of being some of the wisest and most knowledgeable people in a community and going to someone's home allowing them to know things about their lifestyle by seeing it with their own eyes instead of taking a history and hoping the patient didn't lie or leave out something important.
And we've lost a lot of medical wisdom that will not be available should some version of the tricorder ever empower doctors to take MRIs, x-rays and similar diagnostics to the patient's home in their little black bag. Those traditions are gone, possibly forever, as an unintended side effect of placing tech on a pedestal above human wisdom.
I'm talking about diagnosing things effectively, not denying people drugs and surgeries. And I don't intend to reply further because from where I sit, it looks to me like HN is going out of its way to intentionally misunderstand me here lately.
I don’t know why you think I was saying only diet and lifestyle, because I definitely wasn’t?
You seem to be imagining that this scenario where the Dr. goes to your home and gives you something useful was ever a mainstream phenomenon, which as far as I’m aware is just not true.
Treating obesity like a lifestyle condition and not a bona fide disease, and dismissing patients with vague recommendations to eat less and exercise more, ignores the overwhelming body of evidence that behavioral interventions don't work and that the body has strong homeostatic mechanisms which fight efforts to move the weight set point.
Because what you wrote sounds a lot like ‘it’s unsolvable’, which is clearly not true either.
Gastric restriction is an extreme solution, and rarely that effective long term.
Pharmacological is often fraught with serious side effects and also has poor long term efficacy.
In my experience it’s usually psychiatric and environmental, which is why it’s hard to tackle, and nearly impossible without patient willingness to change - and that’s always hard.
Even if willing to change, being able to change is often very difficult too as it’s rarely one factor.
Here is a 2022 critique of the narrow-mindedness of Evidence Based Medicine, by an Oxford professor and the author of what is perhaps the most widely used textbook on Evidence Based Medicine: https://ebm.bmj.com/content/27/5/253
"It is surely time to turn to a more fit-for-purpose scientific paradigm. Complex adaptive systems theory proposes that precise quantification of particular cause-effect relationships is both impossible (because such relationships are not constant and cannot be meaningfully isolated) and unnecessary (because what matters is what emerges in a particular real-world situation). This paradigm proposes that where multiple factors are interacting in dynamic and unpredictable ways, naturalistic methods and rapid-cycle evaluation are the preferred study design. The 20th-century logic of evidence-based medicine, in which scientists pursued the goals of certainty, predictability and linear causality, remains useful in some circumstances (for example, the drug and vaccine trials referred to above). But at a population and system level, we need to embrace 21st-century epistemology and methods to study how best to cope with uncertainty, unpredictability and non-linear causality [16].
In a complex system, the question driving scientific inquiry is not “what is the effect size and is it statistically significant once other variables have been controlled for?” but “does this intervention contribute, along with other factors, to a desirable outcome?”. Multiple interventions might each contribute to an overall beneficial effect through heterogeneous effects on disparate causal pathways, even though none would have a statistically significant impact on any predefined variable [11]. To illuminate such influences, we need to apply research designs that foreground dynamic interactions and emergence. These include in-depth, mixed-method case studies (primary research) and narrative reviews (secondary research) that tease out interconnections and highlight generative causality across the system [16, 17]."
The author of the article was in favor of face masks even in the early stages of the COVID pandemic. Meanwhile, the mask opponents concluded from the principles of evidence-based medicine that masks are not effective. During a pandemic, there is no time to wait for 10 years until sufficient evidence accumulates. If evidence-based medicine cannot be used in the case of pandemics, an improved methodology is needed.
Whether or not "guesswork that's probably scientifically informed" is in fact "improved methodology" is the question. As is using the fact that heads came up when you flipped a coin hoping for heads is sufficient grounds for applying the same method to much more consequential tasks. I for one am very glad we didn't abandon the supposedly far too cautious approach of evidence-based medicine when we properly tested the COVID vaccines, for example, before administering them to billions. That of course is far more consequential to ending the pandemic than masks ever were. But sure, for low-effort low-risk policies like "wear masks made out of whatever, it might do something", we can go with scientifically informed guesswork.
You can check the author's (Trisha Greenhalgh) other articles:
"But there is a more fundamental—i.e. philosophical rather than methodological or practical—objection to the emphasis on RCTs to the exclusion of other kinds of evidence, and that is the assumption, based on what might be called naive empiricism, that data can be identified, collected, analysed and summarized without the need for theory. Academics in many other scientific disciplines emphatically reject the assumption that controlled experiments should always and necessarily over-ride mechanistic evidence, defined as evidence produced by multiple different methods which help illuminate and explain phenomena at a theoretical level [40,41]."
"Evidence-based medicine (EBM’s) traditional methods, especially randomised controlled trials (RCTs) and meta-analyses, along with risk-of-bias tools and checklists, have contributed significantly to the science of COVID-19. But these methods and tools were designed primarily to answer simple, focused questions in a stable context where yesterday’s research can be mapped more or less unproblematically onto today’s clinical and policy questions. They have significant limitations when extended to complex questions about a novel pathogen causing chaos across multiple sectors in a fast-changing global context. Non-pharmaceutical interventions which combine material artefacts, human behaviour, organisational directives, occupational health and safety, and the built environment are a case in point: EBM’s experimental, intervention-focused, checklist-driven, effect-size-oriented and deductive approach has sometimes confused rather than informed debate. While RCTs are important, exclusion of other study designs and evidence sources has been particularly problematic in a context where rapid decision making is needed in order to save lives and protect health. It is time to bring in a wider range of evidence and a more pluralist approach to defining what counts as ‘high-quality’ evidence. We introduce some conceptual tools and quality frameworks from various fields involving what is known as mechanistic research, including complexity science, engineering and the social sciences. We propose that the tools and frameworks of mechanistic evidence, sometimes known as ‘EBM+’ when combined with traditional EBM, might be used to develop and evaluate the interdisciplinary evidence base needed to take us out of this protracted pandemic. Further articles in this series will apply pluralistic methods to specific research questions."
"HIPPA has made it basically impossible to do science in Medicine without being part of the establishment"
I was able to get access to, and use, HIPAA-covered data as a graduate student working in a field most of the "establishment" dismisses as either useless or confusing.
"The scientist responsible for the discovery and treatment of an issue, can't even legally practice."
I work with a half-dozen practicing physician-scientists.
What exactly is HIPAA’s role here? With or without HIPAA, medical institutions don’t want to share their data, especially not the ones with good data because they do their own research.
As someone more familiar with HIPAA than the average lay person (but far from an expert).
There's a lot that HIPAA does both to restrict access to data and to ensure that it's shared freely among institutions. It's kind of this weird open-secret system that works because people get fired for even minor offenses.
I don't know what life was like before HIPAA, but after HIPAA it creates a lot of risk aversion. This seems to come out in two ways:
* Risk of identifiable information leaking via a published report. Anonymizing data is much harder than most people anticipate (a toy illustration of why follows after this list).
* Internal auditing and appropriate use. "Opening" a patient's record has to either be (1) for their care or (2) an explicitly permitted use from the patient (research). That means that while an institution must be a steward of their patients' data, they cannot simply use it however they want. This isn't much different from AWS's handling of customer databases: while AWS keeps all of the data on their servers, they don't really have access to it.
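On the anonymization point above, a toy back-of-the-envelope sketch (my own rough, assumed numbers, not anything from HIPAA guidance) of why stripping names is rarely enough:

    # Assumed rough figures: ~40,000 US ZIP codes, ~90 years of plausible birth dates, 2 sexes.
    zip_codes = 40_000
    birth_dates = 365 * 90
    sexes = 2

    cells = zip_codes * birth_dates * sexes   # distinct (ZIP, DOB, sex) combinations
    population = 330_000_000                  # approximate US population

    print(f"possible (ZIP, DOB, sex) cells: {cells:,}")          # ~2.6 billion
    print(f"average people per cell: {population / cells:.2f}")  # ~0.13

    # With far more cells than people, most combinations pick out at most one person,
    # so a "de-identified" record carrying these three fields is often effectively identifiable.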
> What exactly is HIPAA’s role here? With or without HIPAA, medical institutions don’t want to share their data, especially not the ones with good data because they do their own research.
I agree that OP's thesis doesn't make much sense.
HIPAA provides a legal framework for institutions to share data without liability[0]. It doesn't apply at all to non-"institutional" actors (in the way that OP seems to be using the word "institutional").
HIPAA is an extra cost for the big players, but there's little, if anything, that it prevents them from doing which they would otherwise want to do.
[0] More precisely, it provides a framework for capping liability exposure for institutions
As a rule of thumb, I find it's usually safe to dismiss criticism of HIPAA from anyone who misspells it as HIPPA. Not knowing that it's an acronym for Health Insurance Portability and Accountability Act (emphasis on portability there, as you pointed out!) tends to be a sign that you don't know what the hell HIPAA is or is for.
"HIPPA," on the other hand, is an ever-convenient boogeyman you can invoke in all kinds of situations.
You'd think it's a common typo, but I have literally only ever seen it spelled as "HIPPA" by people trying to use HIPAA as a punching bag for whatever happens to be their current pet issue. And those people almost always misspell it.
The number of anti-vaxxers citing "HIPPA" as the reason why the local concert hall was trampling on their civil rights by asking to see proof of vaccination was... extremely high.
Yes, this is almost certainly true. However, so far we've underutilized the data, for a number of mostly social/incentivization reasons, rather than scientific ones.
FHIR is an HL7 standard. While conforming to FHIR makes interchange easier, it doesn't guarantee that the clinical data is accurate, complete, or properly coded. Researchers typically still have to put a huge amount of effort into data cleansing, especially if they need to merge data sets from multiple provider organizations.
HIPAA (not HIPPA) actually makes data more portable, not less. It also only applies to a subsection of the medical research industry. A great deal of discoveries don't involve HIPAA-covered data at all.
> >HIPPA has made it basically impossible to do science in Medicine without being part of the establishment
I really don't follow this line of reasoning at all.
If anything, HIPAA makes it really hard for "the establishment" to do medicine. It actually doesn't apply to non-"establishment" actors at all, because they're largely independent entities that are not subject to HIPAA by law.
I am sure I am missing the context, but I guess the point is that the authors were sick of hearing about the promise that statistical analysis and epidemiology will outperform the classical medical approach - slandering the BMJ several times for some reason.
Even today we see some hangers-on who assert that their way is the best way - that what they know from personal medical cases, and their own opinion and interpretation, is more valuable than some statistical prediction.
Well, if I have understood their sentiment correctly: facts are facts. Today the best medical care in the world is driven by data-based adaptation rather than subjective opinion. If a treatment has a robust statistical impact, it will be preferred. If new methods produce outcomes no better than random noise, then we must agree that they are not better. If the physician has anecdotes about why they think some trick works best, then - in the words of Pearson - “statistics on the table, please”.
I am sorry to make the joke but it seems apt:
“Dr Charlatan disgruntled about needing evidence for science”.
Statistics have an inherent weakness when you don't have consistent buckets to place everything into. If individual specialists classify cases in their own consistent way, they really can see statistically valid signals from relatively few cases, while double blind studies that pool patients under broader or different classification criteria see improvements below the noise floor.
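A toy simulation (all numbers invented) illustrates the dilution mechanism: a treatment that only helps a specific subtype looks strong when a consistent, narrow classification isolates that subtype, and gets washed toward the noise floor when a trial pools it with look-alike cases.

```python
import random
random.seed(0)

# Invented model: the treatment improves outcomes by +1.0 SD, but only in a
# subtype that makes up 20% of patients who superficially look eligible.
def outcome(is_true_subtype: bool, treated: bool) -> float:
    effect = 1.0 if (is_true_subtype and treated) else 0.0
    return random.gauss(effect, 1.0)

def mean(xs):
    return sum(xs) / len(xs)

# Specialist with a consistent, narrow bucket: only true-subtype cases, n=30 per arm.
spec_treated = [outcome(True, True) for _ in range(30)]
spec_control = [outcome(True, False) for _ in range(30)]

# Pooled trial with a broader bucket: 20% subtype, 80% look-alikes, n=300 per arm.
def pooled_arm(treated: bool, n: int = 300):
    return [outcome(random.random() < 0.2, treated) for _ in range(n)]

pool_treated, pool_control = pooled_arm(True), pooled_arm(False)

print("specialist estimate:", round(mean(spec_treated) - mean(spec_control), 2))
print("pooled trial estimate:", round(mean(pool_treated) - mean(pool_control), 2))
# Typical run: roughly +1.0 SD for the specialist vs. roughly +0.2 SD for the
# pooled trial. The effect didn't change, only the bucketing did.
```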
Also, for the vast majority of problems there is simply no scientific evidence in the strict sense.
The only evidential basis for causal analysis is experiments where (1) you intervene to bring about the cause deterministically, and (2) you control all confounding causes.
Those conditions are impossible to meet for the vast majority of interactions in complex systems, such as people & medicine.
If you run associative statistics on confounded data collected in observational studies, you may as well correlate star signs with outcomes -- no doubt people being treated under winter signs will do worse than those under Gemini. Astrology, QED.
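To make the confounding point concrete, here is a toy observational data set (all numbers invented) where a genuinely helpful treatment looks harmful simply because sicker patients are more likely to receive it. Stratifying on the confounder recovers the right sign, but only because in this toy world we happen to have measured it.

```python
import random
random.seed(1)

# Toy observational data: the drug truly helps (+0.5 on a recovery score),
# but doctors give it preferentially to sicker patients.
# Severity is the confounder.
def simulate(n: int = 10_000):
    rows = []
    for _ in range(n):
        severity = random.random()               # 0 = mild, 1 = severe
        treated = random.random() < severity     # sicker -> more likely treated
        recovery = (5.0 - 4.0 * severity
                    + (0.5 if treated else 0.0)
                    + random.gauss(0.0, 0.5))
        rows.append((severity, treated, recovery))
    return rows

def mean(xs):
    return sum(xs) / len(xs)

rows = simulate()
naive = mean([r for s, t, r in rows if t]) - mean([r for s, t, r in rows if not t])
print("naive treated-vs-untreated difference:", round(naive, 2))
# Typical run: about -0.8, i.e. the helpful drug "looks" harmful, because
# treatment is acting as a marker for severity.

# Stratifying on the confounder (possible here only because we simulated it)
# recovers a positive effect in every stratum:
for lo, hi in [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]:
    t = [r for s, tr, r in rows if tr and lo <= s < hi]
    u = [r for s, tr, r in rows if not tr and lo <= s < hi]
    print(f"severity {lo:.2f}-{hi:.2f}:", round(mean(t) - mean(u), 2))
# Each stratum comes out around +0.3: the right sign, still biased toward zero
# by residual confounding within the strata.
```

The real-world problem, of course, is that the important confounders usually aren't measured, or aren't even known.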
Brings to mind an interesting tidbit I ran into--long ago your sign probably did correlate with meaningful things in life. Not that the stars mattered, but the seasons did. Early life nutrition would vary. Even today you'll find a correlation between signs and ADHD diagnoses--because of the school year. There is almost a year of difference between the oldest students entering school and the youngest--and those youngest are appreciably more likely to get an ADHD diagnosis.
As someone currently suffering from something I statistically have no real justification for getting fixed, I'm grateful I have a doctor who's going to fix it, statistics be damned.
In my case, I've been suffering from disc herniation induced sciatica for 5 months
When I say suffering, I mean it - I haven't been able to stand, sit, lie down, walk, do anything without constant and extreme pain. For months on end now.
There is a gold standard study called SPORT that proves, more or less beyond all doubt, that in the long run there is no difference in outcomes between patients who have disc herniations surgically removed and those who don't. Disc herniations eventually heal on their own; all surgery does is fix the problem sooner than it would otherwise have resolved.
So the tax-funded public health care systems in Sweden and other similar countries do not surgically remove disc herniations except in extreme cases (loss of bowel control, etc.).
Thankfully, in the States, I'm able to have the surgery, so I will.
I couldn't care less if in the long run there is no difference in the aggregate - I don't want to wait another year or two or more for the herniation to heal itself.
Uh, no. That study is literally the source of truth on that topic. There's no "to add to".
I hope you never have to experience what me and others who have the same problem have experienced, but I guarantee you you'd want to do the surgery too if you ever did.
In any case, I was happy to share my story but I have zero interest in debating it, so I'll check out here.
I've had constant back pain since my teens (and I'm 50) likely due to a damaged disk. I've already gone through this (I work in the medical field and can consult with doctors and medical researchers). I concluded that the risks of back surgery making a problem worse are high enough that I am going to live with the pain for the foreseeable future.
What are you talking about? The emergency procedures were used to get the new vaccines out quickly, but they have since completed the 'normal' procedures for approval.
Regardless of what the GP said, you broke the site guidelines here, which ask you to edit out swipes (like "What are you talking about"). Your comment would be just fine without that bit.
We've had to ask you this kind of thing more than once before. If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
This is heavy-handed moderation. I actually wanted to know what he's talking about - as in, to what approval process is GP referring? And when faced with what appeared to be actual vaccine misinformation, you decide to come down on my post instead? Do you just not like my vernacular? Have a problem with my username?
"What are you talking about" is a common swipe and guaranteed to land as such. It isn't used in contemporary English to ask neutrally what someone is referring to. If that's what you wanted to do, there are plenty of other phrases to use.
> And when faced with what appeared to be actual vaccine misinformation, you decide to come down on my post instead?
By the time I saw the GP post, it had been replaced with "[deleted]", so it wasn't a question of "instead". (It's not a good practice to replace a comment with "[deleted]", but that's a different issue.) As for "misinformation", people mostly use that term to mean something they strongly disagree with. It isn't against HN's rules to be wrong. Possibly the GP did break HN's rules in some other way, but that's independent of whether you broke the rules (which you did). Users here need to follow the rules regardless of how bad another post is or you feel it is.
"(effect sizes) Some of our favorite medications, including statins, anticholinergics, and bisphosphonates, don’t reach the 0.50 level. And many more, including triptans, benzodiazepines (!), and Ritalin (!!) don’t reach 0.875."
"There’s no anti-ibuprofen lobby trying to rile people up about NSAIDs, so nobody’s pointed out that this is “clinically insignificant”. But by traditional standards, it is!"
I conclude that attempts to classify medical decisions as justified or unjustified by scientific evidence have no foundation in logic and that the term 'evidence-based medicine' is logically indistinguishable from the term 'medicine'. The use of the term 'evidence-based medicine' calls for a new type of authoritarianism in medical practice.
The ancient medical schools varied quite a bit when it came to questions of evidence. Something to think about perhaps.
>The Empiricists were sceptics in their attitude to causes, thinking that observation and report of evident conditions and their cures was sufficient for medical science, and thus eschewing causal theory. Rationalist doctors attacked the inadequate methodology of the Empiricists, and they try to explain why the antecedent cause should bring on such and such an effect, while emphasizing the need to decide rationally the proper use of induction, and the relevancy of similarities. The Methodists treat as irrelevant antecedent causes or factors, and they recognized just two pathological conditions—relaxation and constriction; the doctor's concern is to determine by direct observation in which state is the body.
Not really. People probably have better examples, but doctors quite often don't actually try to figure out which antibiotic is best - they just pick something for no real reason (they remember the name) and write a prescription. There are studies on which antibiotic to use in quite a lot of cases, but following them takes effort. You get a Z-Pack, and you get a Z-Pack, etc...
Probably one of the best known examples is the establishment of Helicobacter pylori as the ultimate cause of ulcers, rather than stress, spicy food, and low pH. Marshall won the Nobel Prize for this, and it's often touted as a situation where there was a lot of initial non-scientific resistance: "of course we know it's stress..."
homeopathic medicine, chiropractic medicine, folk medicine, traditional medicine, osteopathic medicine, crystal healing medicine... you can make an infinite list of nonsense
and if you want to disambiguate and say "evidence based medicine" to refer to actual medicine, they can just make it "evidence based homeopathic medicine" and so on and so on
When Western doctors visited China under Nixon, the great medical practices of Chinese medicine were on full view: acupuncture, herbs, and the rest. The Western doctors uniformly mocked the Chinese upon return - oddly reminiscent of the Chinese mocking the "stone age" Tibetans at a later date. Zero medicine anywhere but in your own guild, it seems.
For the sake of clarity for those not inclined to read the linked article: Historically, over the two millennia prior to Mao, traditional medicine in China was a highly individual and idiosyncratic practice. While there were some aspects which were generally held in common, the overall practice was incredibly diverse. It wasn't until Mao that these practices were assembled into the more unified and systematic Traditional Chinese Medicine that people are familiar with today.
Curious how combative some people seem to get about homeopathics, when placebo has been shown to have a consistent positive effect, without side effects, for almost every illness.
Yes, it's 'curious' how people with a sense of basic decency get 'combative' when the Alternative Medicine Industry lies to them to sell useless treatments for diseases that could actually be improved by real medicine. It really is odd how that works, isn't it?
That's a weird claim. Phenylephrine has a number of well documented and researched effects with demonstrable efficacy greater than that of a placebo. Now, there are certainly people who have taken and do take it as a decongestant despite that particular usage having been shown to be ineffective, but that doesn't make it crap, and it certainly doesn't place it in the same category as homeopathic remedies.
They aren't honestly selling placebos as placebos (the way Zeebo does). They are marketing actual 'cures' for serious maladies that could respond to real treatments.
If placebos don’t have side effects, why do the control groups in placebo-based trials always report side effects?
And given that “homeopathic” to most casually informed people is the same as “naturopathic”, what those people are taking are not necessarily harmless placebos.
As for your “almost every illness,” that’s just meaningless hyperbole.
Source: I am a physician.