withinboredom's comments

That's basically how the web started. You can serve a ridiculous number of users from a single physical machine. It isn't until you get into the hundreds-of-millions-of-users ballpark that you actually need to invest in architecture. The "cloud" lets you rent a small slice of a physical machine, so it makes it feel like you need more machines than you do. But a modern server? Easily 16-32+ cores, 128+ GB of RAM, and hundreds of TB of storage, all for less than $2k per month (amortized). Yeah, you need an actual (small) team of people to manage it, but that will get you so far it's utterly ridiculous.

Assuming you can accept 99% uptime (that's ~3.7 days a year of downtime), and if you were on a single cloud in 2025, that's basically what last year gave you anyway.


I agree... there is scale, and then there is scale. And then there is Facebook scale.

We need not assume internet-scale, FB-level traffic for typical business apps, where one instance may support a few hundred users max, or even a few thousand. Over-engineering under such assumptions is likely cost-ineffective and may even increase the risk surface. $0.02


It goes much further than that: a single moderately sized VPS web server can handle millions of hard-to-cache requests per day, all hitting the db.

Most will want to use a managed db, but for a really basic setup you can just run Postgres or MySQL on the same box. And running your own db on a separate VPS is not hard either.
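To show how little a same-box db changes in the application, here's a minimal sketch (assuming Rust with the postgres crate; the user and dbname are placeholders): the app just talks to loopback.

    use postgres::{Client, NoTls};

    fn main() -> Result<(), postgres::Error> {
        // Same-box setup: the app reaches Postgres over 127.0.0.1 (or the
        // unix socket), so there is no managed-db network hop at all.
        let mut client = Client::connect("host=127.0.0.1 user=app dbname=app", NoTls)?;
        let row = client.query_one("SELECT 1 + 1", &[])?;
        let sum: i32 = row.get(0);
        println!("db says {}", sum);
        Ok(())
    }

Moving the db to a second VPS later is a one-line change to the connection string.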


Why? My router won't even let me DMZ a single IPv6 device or open all ports to a single IPv6 device. It will only let me open one port at a time.

Different routers have different options, but all of them have come with a pretty strong firewall out of the box, turned on by default, for the last 10 years.


Valgrind won't show you leaks where you (or a GC) still hold a reference. This could mean you're holding on to large chunks of memory that are still referenced from a closure or something. I don't know what language you're using or anything about your project, but if it's a GC language, make sure you disable the GC when running under valgrind (not doing so is a common mistake). You'll see a ton of false positives for memory the GC would normally have cleaned up for you, but some of those won't be false positives.

Ghostty is written in Zig.

It will, but they will be abbreviated (only the total amount shown, not the individual stack traces) unless you ask for them in full, e.g. with --leak-check=full --show-leak-kinds=all.
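To illustrate the two leak kinds, a minimal sketch (Rust here rather than Zig, purely to keep it short; the distinction is the same):

    use std::sync::OnceLock;

    // A global keeps a pointer to this allocation alive until process
    // exit, so valgrind files it under "still reachable".
    static STILL_HELD: OnceLock<&'static Vec<u8>> = OnceLock::new();

    fn main() {
        // No pointer to this buffer survives to exit, so valgrind
        // typically reports it as "definitely lost".
        std::mem::forget(vec![1u8; 4096]);

        let _ = STILL_HELD.set(Box::leak(Box::new(vec![2u8; 4096])));
        println!("still holding {} bytes", STILL_HELD.get().unwrap().len());
    }

Run it under valgrind --leak-check=full --show-leak-kinds=all and both allocations get itemized stack traces; without the flags, the still-reachable one only shows up in the summary total.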

They globally reset the privacy settings for pretty entertaining reasons. Every few years a post goes around making crazy privacy claims, with instructions to change your privacy settings to only "share with yourself". If you're dumb enough to do it, you basically shadow-ban yourself. If enough people do it, they have to reset the settings, because those same people will then complain about their aunt/neighbor/dad/sister not being able to see their posts, with no idea why.

I just honestly feel Zuck has lost the benefit of the doubt, but I'm willing to be proven wrong.

Um. 240 is a multiple of 60.

Yes, so you either get a strobe-on/strobe-off pattern every two frames if you're in 60 Hz country, or a slower crawling flicker in 50 Hz land. Migraine-inducing either way. Also, your phone won't shutter at exactly 60.00/50.00 Hz (mains frequency is pretty stable, usually to at least the first decimal place), so you'll see a jittery, jumpy phase drift on top of that.
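You can see both patterns with a toy model (my own sketch, assuming an idealized lamp whose brightness follows sin^2 of the mains phase, and an instantaneous shutter):

    fn main() {
        let fps = 240.0_f64;
        for mains_hz in [60.0_f64, 50.0] {
            println!("{} Hz mains (lamp flickers at {} Hz):", mains_hz, 2.0 * mains_hz);
            for frame in 0..12 {
                let t = frame as f64 / fps;
                // Lamp brightness sampled at the instant the shutter fires.
                let intensity = (std::f64::consts::TAU * mains_hz * t).sin().powi(2);
                println!("  frame {:2}: {:.2}", frame, intensity);
            }
        }
    }

At 60 Hz the samples alternate cleanly between 0.00 and 1.00 (the two-frame strobe); at 50 Hz the 240 fps samples walk through the 100 Hz flicker phase and only repeat every 12 frames (the crawl).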

Yep, and this breaks all sorts of computer-vision setups. We had to compensate for it in the cameras that track the Oculus controllers, since folks are often playing under indoor lighting.

Sometimes, you just need to know if an idea will even work, or what it would look like. If you have to refactor half the codebase (true story for me once), that makes the change a much harder sell without showing some benefits first. I.e., it keeps you from discovering better optimizations, because you have to pay the costs upfront.

In Rust, it's a lot easier to refactor half the codebase than it would be in another language. Once you're done fighting the compiler, you're usually done, instead of *never* being sure whether you did enough testing.

I can’t tell if you missed the whole point of “exploratory”…

I don't know either. Personally I can spend days or more on exploratory efforts that end up scrapped. My source code is usually version controlled, so I never have to worry about messing things up. But I suppose not everyone has this kind of time for stuff that isn't guaranteed to pan out.

Sometimes I will prototype an exploration in another crate or module so I can see if there are performance gains in a more limited application. Sometimes these explorations will grow into a full rewrite that ends up better than if I had refactored.


> Sometimes, you just need to know if an idea will even work or what it would look like.

I think what GP is trying to say is that the value of such exploration might be limited if you end up with something incompatible with "proper" Rust anyways.

I suppose it depends on how frequently "transition through invalid Rust while experimenting and end up with valid Rust" happens instead of "transition through invalid Rust while experimenting and end up with invalid Rust", as well as how hard it is to fix the invalid Rust in the latter case.


In my case, I was adding a new admin API endpoint, which meant pulling through a bunch of stuff that was never meant for the API, and I got into a fight with the borrow checker. I just wanted to see if I broke something at the feature level (it was never meant to be exposed by the API, after all), and I didn't care about memory safety at that point. Refactoring it properly just to get memory safety, just to see what would have broken, blew past my time-box, so it never saw the light of a merge request. Had I been able to prove the concept worked, I would have opened an MR against the open issue to work out the best way to refactor it "properly". As it was, I would have had to completely guess at the right way without ever knowing if the idea would work in the first place.

I guess that doesn't neatly fall into the categories I described, though I think it's closer to the former than the latter.

That being said, I think what you describe sounds like a case where relaxed checks could be beneficial. It's the eternal tradeoff of requiring strong static checks, I suppose.


Can't you usually just throw some quick reference-counted cells in there, to make the borrow checker happy enough for a prototype without refactoring the whole code base?
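Something like this, say (a hypothetical sketch; the Config type is made up), trading compile-time borrow checks for runtime ones, which is often good enough for a spike:

    use std::cell::RefCell;
    use std::rc::Rc;

    #[derive(Debug)]
    struct Config {
        verbose: bool,
    }

    fn main() {
        // Two parts of the prototype share mutable state without any
        // ownership redesign: clone the Rc, mutate through the RefCell.
        let shared = Rc::new(RefCell::new(Config { verbose: false }));
        let for_other_module = Rc::clone(&shared);

        // Borrow checking now happens at runtime; overlapping a mutable
        // borrow with any other borrow panics instead of failing to compile.
        for_other_module.borrow_mut().verbose = true;
        println!("{:?}", shared.borrow());
    }

Fine for a throwaway prototype; the cleanup pass can restore real ownership once the idea is proven.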

Your speakers do, so that people's voices match their mouth movements. The speaker's clock needs to stay in sync with the CPU's clock, and they run at different frequencies.


And if you have 3, group 1, group A, and group Alpha. Beyond that, just use colors.


Yeah. A disk failed, and I had to recreate the blog from whatever was still available via other means.


Nagle's algorithm does really well when you're on shitty wifi.

Applications also don't know the MTU (the maximum packet size) of the interface they're using. Hell, they probably don't even know which interface they're using! This is all abstracted away. So if you're on a network with a 14xx MTU (such as a VPN), assuming an MTU of 1500 means you'll send one full packet and then a tiny little packet after it, for every one packet you think you're sending!

Nagle's algorithm lets you just send data, no problem, and let the kernel batch up the packets. If you control the protocol, just use a design that prevents delayed ACK from causing the latency, e.g. the "OK" reply from Redis.
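A rough sketch of what "just send data" looks like (Rust here, assuming a Redis listening locally on 6379 and its inline command syntax; host and commands are placeholders):

    use std::io::Write;
    use std::net::TcpStream;

    fn main() -> std::io::Result<()> {
        let mut stream = TcpStream::connect("127.0.0.1:6379")?;

        // With Nagle left on (the default), the kernel coalesces these
        // small writes into full segments instead of emitting one tiny
        // packet per write() call.
        for i in 0..1000 {
            write!(stream, "SET key{} {}\r\n", i, i)?;
        }

        // For a strictly request/response, latency-sensitive protocol,
        // the per-socket escape hatch is to disable Nagle entirely:
        stream.set_nodelay(true)?;
        Ok(())
    }

With Nagle on, those 1000 small writes leave as a handful of full segments; set_nodelay(true) is for when every write really must go out immediately.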

