Hacker News

One thing that seems missing from a lot of these comparisons is the base rate of success for dieting itself.

Most people who “start a diet” never meaningfully lose weight in the first place, or lose a small amount and plateau quickly. The cohort of “dieters who regain weight” is already heavily filtered toward the minority who were unusually successful at dieting to begin with. That selection bias matters a lot when you then compare regain rates.

GLP-1s change that denominator. A much larger fraction of people who start the intervention actually lose substantial weight. So even if regain after stopping is faster conditional on having lost weight, the overall success rate (people who lose and keep off a clinically meaningful amount) may still be higher than dieting alone.

In other words: “people who regain weight after stopping GLP-1s” vs “people who regain weight after dieting” ignores the much larger group of dieters who never lost anything to regain. From a population perspective, that’s a pretty important omission.
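The base-rate argument above can be made concrete with a toy calculation. The numbers below are invented purely to illustrate the arithmetic, not taken from any study: even if regain conditional on losing weight is worse for the drug, the population-level success rate can still favor it when far more people lose weight in the first place.

```python
# Hypothetical illustration of the selection-bias argument.
# All fractions are made up for the sake of the arithmetic.

def keep_off_rate(start_n, lose_frac, regain_frac):
    """Fraction of everyone who STARTS the intervention who both
    loses a clinically meaningful amount and keeps it off."""
    losers = start_n * lose_frac           # those who lose meaningful weight
    keepers = losers * (1 - regain_frac)   # those who don't regain it
    return keepers / start_n

# Dieting: few succeed at losing, and regain among them is "slower".
diet = keep_off_rate(1000, lose_frac=0.15, regain_frac=0.60)

# GLP-1: most lose weight; conditional regain is assumed faster.
glp1 = keep_off_rate(1000, lose_frac=0.80, regain_frac=0.75)

print(f"diet: {diet:.1%}, GLP-1: {glp1:.1%}")
# With these made-up inputs, GLP-1 comes out ahead at the population
# level despite the worse conditional regain rate.
```

Comparing only "regainers vs regainers" drops the `lose_frac` term, which is exactly the denominator the comment says the comparison ignores.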



You are the third person to mention that the cohort is "dieters who regain weight".

Reading the article and its referenced study, I thought the cohort was "all who were included in the non-placebo group of the RCT" and that the average was taken over all such subjects.

I've tried, and I can't find any evidence to the contrary. Am I wrong and missing some key claim in the study? I would appreciate it if you could support your claim.


You're right.

> Weight regain data are expressed as weight change from baseline (pre-intervention) or difference in weight change from baseline between intervention and control for randomised controlled trials. When analysing and presenting data from all studies, we used weight change from single arm trials, observational studies, and the intervention groups from randomised controlled trials. When analysing data from randomised controlled trials only, we calculated the difference in weight change between the intervention and control groups at the end of the intervention and at each available time point after the end of the intervention. When studies had multiple intervention arms, we treated each arm as a separate arm and divided the number in the comparator by the number of intervention arms to avoid duplicative counting.19

https://www.bmj.com/content/392/bmj-2025-085304


If you look back at the pre-Ozempic era, you had articles like this:

https://www.vox.com/2016/5/10/11649210/biggest-loser-weight-...

> The results at year eight are heartening. Eight years later and 50.3 percent of the intensive lifestyle intervention group and 35.7 percent of the usual care group were maintaining losses of ≥5 percent, while 26.9 percent of the intensive group and 17.2 percent of the usual care group were maintaining losses of ≥10 percent.

What the obesity doctor found "heartening" was that half of the participants maintained a largely imperceptible amount of weight loss.

This was considered success at the time.

For comparison, going from the edge of obese to the edge of normal weight is about a 16% reduction.
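The 16% figure follows from the standard BMI cutoffs: at a fixed height, weight scales linearly with BMI, so moving from the obesity threshold (BMI 30) down to the top of the normal range (BMI 25) is a 5/30 ≈ 16.7% drop in body weight. A quick check:

```python
# BMI = weight_kg / height_m**2, so at fixed height weight is
# proportional to BMI. Cutoffs are the standard WHO thresholds.
BMI_OBESE = 30.0    # obesity threshold
BMI_NORMAL = 25.0   # upper edge of the normal range

reduction = (BMI_OBESE - BMI_NORMAL) / BMI_OBESE
print(f"{reduction:.1%}")  # ~16.7%
```

This puts the ≥5% and ≥10% maintenance thresholds quoted above in context: even the ≥10% group is only a bit over halfway to that transition.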


It's because taking a drug requires zero willpower


This is not true. You have to procure it and take it consistently over a long period of time, there are side effects, and some people really dislike needles.


I was concerned about this too. Gemini informed me that the researchers "found that even when comparing people who had lost the same amount of weight, the rate of regain was significantly faster in the drug group (GLP-1s) than in the diet group (approximately 0.3 kg/month faster)."

Also, both groups contained those who didn't lose weight. They did not omit dieters who failed to lose weight or those who weren't "super responders."


I apologize in advance for the tone of my response.

>Gemini informed me...

Phrases like this are essentially, "I asked an LLM to interpret this and I didn't bother verifying its accuracy, but I will now post it as fact."


Contrast this with taking the headline as fact without further scrutinizing it, which happens often. Or, look at the other posts here that are assuming that the cohort was restricted to only those who lost weight.

In an informal conversational context such as a forum, we don't expect every commenter to spend 20 minutes reading through the research. Yet we now have tools that let us do just that in under a minute. Not long ago we'd have been justified in being skeptical of these tools, but they've gotten to the point where we're justified in believing them in many contexts. I believed it in this case, and this was the right time-spent/scrutiny tradeoff for me. You're free to prove the claim wrong. If it were wrong, then I'd agree it would be good to see where.

Probably many people are using the tools and then "covering" before posting. That would be posting it as "fact". That's not what I did: I made the reader aware of the source of the information and let them judge it for what it was worth. I would argue that it's actually more transparent and authentic to admit exactly where you're getting the information. It's not like the stakes are high: the information is public, and anyone can check it. Hacker News understandably might be comparatively late to this norm, as its users have a better understanding of the tech and things like how often these models hallucinate. But I believe this is the way the wind is blowing.


>Yet we now have tools that allow us to do just that in less than a minute.

With this tool, you read in under one minute what would've taken you 20 minutes before?


I'm not sure exactly what you're asking. What I meant was that, for example, before you might've needed to track down where to find the underlying research paper, then read through the paper to find the relevant section. That might've taken 20 minutes for a task like this one. Now you can set an LLM on it, and get a concise answer in less than a minute.



