I agree with the principle: log level error should mean someone needs to fix something.
This post frames the problem almost entirely from a sysadmin-as-log-consumer perspective, and concludes that a correctly functioning system shouldn’t emit error logs at all. That only holds if sysadmins are the only "someone" who can act.
In practice, if there is a human who needs to take action - whether that's a developer fixing a bug, an operator resolving an infra issue, or someone coordinating with an external dependency - then it's an error. The solution isn't to downgrade the severity, but to route the log and notify the right owner.
Severity should encode actionability, not just system correctness.
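To make that concrete, here's a minimal sketch of severity-as-actionability routing; the owner names and sinks are hypothetical stand-ins for whatever pager, bug tracker, or vendor queue a team actually uses:

```ts
// Minimal sketch of "severity encodes actionability": every error names
// an owner, and the reporter routes a notification to that owner's channel.
type Owner = "backend-team" | "infra-oncall" | "vendor-liaison";

interface ActionableError {
  message: string;
  owner: Owner; // the human (or team) expected to act
  context?: Record<string, unknown>;
}

// In a real system these would be a bug tracker, a pager, a vendor queue.
const routes: Record<Owner, (e: ActionableError) => void> = {
  "backend-team": (e) => console.error("[bug tracker]", e.message),
  "infra-oncall": (e) => console.error("[pager]", e.message),
  "vendor-liaison": (e) => console.error("[vendor queue]", e.message),
};

// Log at error severity *and* notify whoever can actually fix it.
function reportError(e: ActionableError): void {
  routes[e.owner](e);
}

reportError({
  message: "payment webhook signature invalid",
  owner: "vendor-liaison",
});
```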
I can see the inspiration, but then again, how much investment will be required to verify the verifier? (It's still code, and is generated by a non-deterministic system.)
No, but it's primarily because Meta has their own server infrastructure already. RSCs are essentially the React team trying to generalize the data fetching patterns from Meta's infrastructure into React itself so they can be used more broadly.
I wrote an extensive post and did a conference talk earlier this year recapping the overall development history and intent of RSCs, as best as I understand it from a mostly-external perspective:
Like I said above and in the post: it was an attempt to generalize the data fetching patterns developed inside of Meta and make them available to all React devs.
If you look at the various talks and articles from the React team over the last 8 years, the general themes are around trying to improve the page loading and data fetching experience.
Former React team member Dan Abramov did a whole series of posts earlier this year with differently-focused explanations of how to grok RSCs: "customizable Backend for Frontend", "avoiding unnecessary roundtrips", etc:
Conceptually, the one-liner Dan came up with that I liked is "extending React's component model to the server". It's still parent components passing props to child components, "just" spread across multiple computers.
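To illustrate that one-liner (not the React team's code, just a minimal sketch assuming a Next.js-style RSC setup; `db` and `LikeButton` are made-up names):

```tsx
// A sketch of "parent passing props to a child, spread across computers".
import { LikeButton } from "./LikeButton"; // a "use client" component

// Hypothetical server-side data layer, declared so the sketch type-checks.
declare const db: {
  posts: {
    findById(id: string): Promise<{
      id: string;
      title: string;
      body: string;
      likes: number;
    }>;
  };
};

// Server component: rendered on the server, so it can await data directly
// instead of triggering a client-side fetch waterfall.
export default async function PostPage({ postId }: { postId: string }) {
  const post = await db.posts.findById(postId);

  // Still the familiar model: a parent renders a child with props. The
  // child just happens to hydrate in the browser, on a different computer.
  return (
    <article>
      <h1>{post.title}</h1>
      <p>{post.body}</p>
      <LikeButton postId={post.id} initialLikes={post.likes} />
    </article>
  );
}
```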
Yeah, the "just" is doing a lot of heavy lifting. Nobody asked for a React server, but it turns out it could be the base for a $10B cloud company. Classic open source rug pull.
Still, even the most libertarian among us generally won't oppose restricting youth access to tobacco, or restricting recreational access to hard drugs.
That's the thing. We don't really ban "youth smoking". We ban sellers selling to youth. Who's accountable is everything in law.
Targeting platforms is like only banning one brand of cigarette. People will just find another. We should instead attack the "seller" here, being the algorithms optimized for selling and not for the enrichment of society.
So, considering there are clear health issues with fast food and television, shall we ban kids from having anything other than fruit and books (but not too complicated ones; we don't want them getting any potentially suicidal ideas)?
You’re framing this as an all-or-nothing choice. The logical inverse of your argument would be: "should we unban hard drugs for everyone, and allow alcohol, tobacco, or porn for kids?"
That kind of binary framing doesn’t really move the discussion forward.
A more constructive approach is case-by-case. Different things sit at different levels of harm, and "ban everything" vs. "ban nothing" isn’t a workable model for society.
You know, I am in a country that allows alcohol for children (in different intensities, e.g. beer at age 14 with parents present, age 16 in the supermarket, age 18 for the hard stuff). As it turns out, our kids are alright.
Tobacco and porn have been more strongly regulated lately. In my teenage years, they were easily available to anyone with coins in their hands. Turns out: that didn't destroy us either.
The first beer, the first pack of strong tobacco (Rothändle, the dirtiest, hardest stuff), the first tiddie magazine from the railway station kiosk: those were rites of passage. It was a way for teenagers to push the envelope, realise that alcohol makes you wobbly, that tobacco causes diarrhoea (believe me, that Rothändle stuff was more chemical weapon than 'smooth'), and that ultimately, all women look about the same undressed, so it is pointless to keep buying. They were small, recoverable mistakes that taught teenagers where their limits were.
Now we have banned all that away - but the teenage urge for self-realization and rebellion found a new outlet in social media. And social media is safer: no-one got lung cancer from TikTok, and no-one woke up in a hospital with Facebook poisoning.
Ultimately, it is the rebellion the fascists dislike, not the fact that people earn money with it. So we ban that, driving teenagers to ever-more-destructive behaviour.
Teenagers need an outlet to be teenagers without living in a state-sanctioned panopticon. If society pathologizes every form of adolescent experimentation, if you let control freaks raise your children, do not be surprised if they turn out to be either actual rebels, or something much, much darker.
"In 2015, 9.3% of high school students reported smoking cigarettes in the last 30 days, down 74% from 36.4% in 1997 when rates peaked after increasing throughout the first half of the 1990s"
It's already a solved problem: load a digital ID into a wallet app, and the operating system can then perform a zero-knowledge proof for each website that the user is over 16. The government issuing the ID doesn't know which websites it's being used for, and the website only gets a binary yes/no for the age and no other personal info:
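For the shape of that flow, here's a minimal sketch; the `Wallet` and `Verifier` interfaces are hypothetical, standing in for whatever a real deployment (e.g. an ISO 18013-5 mDL wallet) would provide:

```ts
// Hypothetical shapes for a privacy-preserving age check; `Wallet` and
// `Verifier` are assumptions, not a real API.
interface AgeProof {
  claim: "age_over_16"; // the only predicate disclosed
  proof: Uint8Array;    // ZK proof bound to the site's challenge
}

interface Wallet {
  // Proves the predicate against the government-issued credential without
  // revealing name, birthdate, or the credential itself.
  proveAgeOver(minAge: number, siteChallenge: Uint8Array): Promise<AgeProof>;
}

interface Verifier {
  // The site learns exactly one bit: over the threshold or not.
  verify(proof: AgeProof, siteChallenge: Uint8Array): Promise<boolean>;
}

async function checkAccess(wallet: Wallet, verifier: Verifier): Promise<boolean> {
  // Fresh per-site nonce, so proofs can't be replayed across sites.
  const challenge = crypto.getRandomValues(new Uint8Array(32));
  const proof = await wallet.proveAgeOver(16, challenge);
  return verifier.verify(proof, challenge); // binary yes/no, nothing else
}
```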
How does this solve the problem of both governments and corporations wanting to implement this in ways that allow them to hoard datasets?
As it stands, the government in the US uses an identity verification vendor that forces you to upload videos of multiple angles of your face, enough data for facial recognition and to build 3D models, along with pictures of your ID.
I use Tor, so I get to see how age verification is implemented all over the world. By and large, the process almost always includes using your government-issued ID and live pictures/videos of your face.
There are zero incentives to implement zero knowledge proofs like this, and billions of dollars of incentives to use age verification as an opportunity to collect population-wide datasets of people's faces in high resolution and 3D. That data is valuable, especially for governments and companies that want to implement accurate facial recognition and who have AI models to train.
Nothing "solves" the problem of governments wanting to collect data on you. Governments will likely always want this, until we start caring about the issue enough to elect ones that don't.
The important point is that such invasive approaches are not required; clearly, whatever process people already use to authenticate with government agencies for a driver's licence or passport would suffice. I think it's the responsibility of knowledgeable tech people to advocate for this.
Most being the operative word. In human-centric bureaucracies, people who don't have ID (for whatever reason: religious conviction, a feud with the relevant government agency, a legal status the computer system was never designed to represent) can still access services in many cases. Naïvely computerising everything will effectively remove rights from those whose paperwork doesn't check out.
ID verification is a universal hammer, to which all problems look like nails, but we shouldn't be so quick to reach for it. Not all of its downsides can be solved with cryptography.
Controlling access to any substance is a long process, and the motives aren’t always clear at the beginning.
I’m not sure why Australian policymakers chose to take this step now, but regardless of the motive, it feels like a meaningful starting point. Social media’s engagement-driven echo chamber model has contributed to a deeply divided world, and governments stepping in can at least make parents’ jobs a little easier.
I can think of a couple of times when I wasn’t the best at something, yet still got opportunities simply because someone well-established in that space liked me.
And I can think of the opposite too - situations where I was at a disadvantage because someone higher up just didn’t like me.
For me, it's more or less balanced out at this point in time. But most people around me, I don't think they've been as lucky.
Ha, Physics majors get the same talk about law school. It's just the selection bias of selecting for people willing to make hard pivots filtering out the under-achieving, go-with-the-flow types.
I really think this is part of the pitch deck for Bun's funding: that a bigger company would acquire it for the technology. The only reason an AI company, or any company for that matter, would acquire it would be to:
I have no experience in this area, so I’ll just ask a noob question:
Can we make it so that if someone is looking at me through smart-glasses without my consent, my glasses respond with some form of interference that gives them a tiny headache?
And if I do grant someone consent to record me, I can just turn my glasses off.
And of course, my glasses don’t record anything, so they wouldn’t be hurting my own eyes.