
I don't think I want to use kubernetes (or anything that uses it) again. Nightmare of broken glass. Back in the day Docker Compose gave me 95% of what I wanted and the complexity was basically one file with few surprises.
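
A minimal sketch of that "one file" setup; the service names and images here are hypothetical, not taken from the comment above:

    services:
      web:
        image: myorg/web:latest    # hypothetical app image
        ports:
          - "8080:8080"
        depends_on:
          - db
      db:
        image: postgres:16
        volumes:
          - dbdata:/var/lib/postgresql/data
    volumes:
      dbdata: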


If you can confidently get it done with docker-compose, you shouldn't even think about using k8s IMO. Completely different scales.

K8s isn't for running containers, it's for implementing complex distributed systems: tenancy/isolation and dynamic scaling and no-downtime service models.


One of the problems seems to be that most moderately complex companies where any one system would be fine with Compose would want to unify their operations, thus going to a complex distributed system with k8s. And then either your unified IT/DevOps team is responsible for supporting all systems on k8s, or all individual dev teams have to be competent with k8s. Worst case, both.


no-downtime is table stakes in 2025. I can't look anyone in the eye and tell them that our product is going to go down for a bit every time we deploy (it'd also be atrocious friction for frequent deployment).


You can do no-downtime deploy of a web service with:

- Kamal

- Docker compose with Caddy (lb_try_duration to hold requests while the HTTP container restarts; see the Caddyfile sketch below)

- Systemd using socket activation (same as Docker compose, it holds HTTP connections while the HTTP service restarts)

So you don't have to buy the whole pig and butcher it to eat bacon.
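
For the Caddy option, a minimal Caddyfile sketch (the domain and upstream are hypothetical): lb_try_duration keeps retrying the upstream while the app container restarts, so clients wait instead of getting an error.

    example.com {
        reverse_proxy app:3000 {
            # keep retrying the upstream for up to 30s while the
            # container restarts, instead of failing the request
            lb_try_duration 30s
            lb_try_interval 250ms
        }
    }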


> - Systemd using socket activation (same as Docker compose, it holds HTTP connections while the HTTP service restarts)

Nit: it holds the TCP connections while the HTTP service restarts. Any HTTP-level stuff would need to be restarted by the client. But that’s true of every “zero downtime” system I’m aware of.
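
A minimal sketch of that socket-activation setup, assuming a hypothetical service called myapp listening on port 8080. systemd owns the listening socket, so incoming connections queue in the kernel while the service restarts:

    # myapp.socket
    [Unit]
    Description=myapp listen socket

    [Socket]
    ListenStream=8080

    [Install]
    WantedBy=sockets.target

    # myapp.service
    [Unit]
    Description=myapp
    Requires=myapp.socket

    [Service]
    # the app must accept the inherited socket (sd_listen_fds / LISTEN_FDS)
    ExecStart=/usr/local/bin/myapp

Then restarting myapp.service leaves the socket up; new connections wait until the service is listening again.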


Being successful enough that any amount of downtime is an existential risk is a great problem to have. 99.99% of companies don't have that problem; even huge, successful businesses can survive unplanned downtime (see: recent major outages).

It's far from table stakes and you can absolutely overengineer your product into the ground by chasing it.

"0 downtime" system << antifragile systems with low MttR.

Something can always break even if your system is "perfect". Utilities, local disasters, cloud dependencies.


Docker Compose still gets you 95% of what you need. I wish Docker Swarm survived.


What happened to it?

I'm still using it without a single issue (except when it messes up the iptables rules).

I still confidently upgrade Docker across all the nodes, workers and managers alike, and it just works. It has never once caused an issue.
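
A common pattern for that kind of rolling engine upgrade, assuming a node named worker-1 (the node name is hypothetical):

    # move workloads off the node before touching it
    docker node update --availability drain worker-1
    # ...upgrade the Docker engine on worker-1, then bring it back
    docker node update --availability active worker-1
    # confirm the node is Ready/Active again
    docker node ls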


Docker the company bet big on Swarm being the de facto container orchestration platform for businesses. It just got completely overshadowed by k8s. Swarm continues to exist and be actively developed, but it’s doomed to fade into obscurity.


For some reason I assumed it was unsupported. That doesn't seem to be the case.


The original iteration of Docker Swarm, now known as Classic, is deprecated. Maybe you were thinking of that?


As I read more about it, yes, that is indeed the case.


> I wish Docker Swarm survived.

I heard good things about Nomad (albeit from before Hashicorp changed their licenses): https://developer.hashicorp.com/nomad

I got the impression it was like a smaller, more opinionated k8s. Like a mix between Docker Swarm and k8s.

It's rare that I see it mentioned though, so I'm not sure how big the community is.


I’d wager that at least half the teams using Kubernetes today should be using Nomad instead. Like the team I’m on now, where I’m literally the only one familiar with Kubernetes and everyone else only has familiarity with more classic EC2-based patterns. Getting someone to even know what Helm does is its own uphill battle. Nomad is a lot simpler. That’s what I like about it.


For better or for worse, it's an orchestrator (for containers/scripts/jars/bare metal), full stop.

Everything else is composable from the rest of the HashiCorp stack: Consul (service mesh and discovery), Vault (secrets), letting you use as much or as little as you need, and it can truly scale to a large deployment as needed.

In the plus column, picking up its config/admin is intuitive in a way that Helm/k8s never really manages.

Philosophy-wise, you can put it in the Unix way of doing things: it does one thing well and gets out of your way, and you add to it as you need/want. Whereas k8s/Helm etc. are one way or the highway, leaving you fighting the deployment half the time.
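
To illustrate the "intuitive config" point, a minimal Nomad job sketch; the job name, image and port are hypothetical. One file covers roughly what a Deployment plus a Service would in k8s:

    job "web" {
      datacenters = ["dc1"]

      group "app" {
        count = 2

        network {
          port "http" {
            to = 8080
          }
        }

        task "server" {
          driver = "docker"

          config {
            image = "myorg/web:1.0"    # hypothetical image
            ports = ["http"]
          }

          resources {
            cpu    = 200    # MHz
            memory = 256    # MB
          }
        }
      }
    }

Run with: nomad job run web.nomad.hcl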


Mitchell Hashimoto was a genius when it came to opinionated design, and that was HashiCorp's biggest strength when it was part of their culture.

It's a shame Nomad couldn't overcome the K8s hype-wagon, but either way IBM is destroying everything good about Hashicorp's products and I would proceed with extreme caution deploying any of their stuff net-new right now...


> I wish Docker Swarm survived.

Using it in prod and also for my personal homelab needs - works pretty well!

At the scale you see over here (load typically served by a single-digit number of instances, pretty much never needing autoscaling), you really don't need Kubernetes unless you get operational benefits from it. The whole country having fewer than 2 million people also helps quite a bit.



