Sad news, but I'm not surprised by this. The whole ecosystem was "killed" (if that can be said) by the K8s buzz and hipsterism (sorry guys, but I see K8s as the Hadoop/BigData of modern days - a solution from a huge company that has no place in 90% of setups). Alternatives like Deis [2] moved to K8s a long time ago. My favorite tool for some time, Rancher [3], did that as well.
I've been using Dokku [1] for a few years on a small setup, surprisingly without a single problem, considering it's written in "not-so-cool" Bash. And I was considering Flynn as the next step if I needed to scale, because Dokku doesn't have clustering support (added: looks like clustering support for Dokku is in the works [4]).
After a lot of evaluation, I got the impression Flynn simply wasn't there yet - whether because of the slow development pace, the low number of supported appliances, or something else, I'm not sure. In the end, I picked Ansible for more distributed installations.
Dokku has been a workhorse for me. For all the fears of “but what about redundancy?”, I have run a fairly successful service on it for five or so years on a single Vultr VPS and made more money from that than any other side project - or all of them combined. Glad I directed my attention to the product and not the devops like I had in the past. 10/10 would recommend.
Absolutely this. While I'm mildly curious what will come of Dokku + Kubernetes (and maybe a little excited, though with tempered expectations), setting up some simple scripting around Dokku means I can be fairly sure that when I've set up a Dokku box, the databases are backed up, and should the box go down and all its data get fried, I can have a new box set up and everything back up and running in much less than an hour from the point that I notice.
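(For anyone wondering what that scripting looks like: here's a rough sketch of a single-box rebuild script. It assumes the dokku-postgres plugin and a database dump already copied onto the new host - the app name, database name, paths, and pinned version are all made up for illustration:)

```shell
#!/bin/sh
# Sketch: rebuild a Dokku box from scratch plus a Postgres backup.
# Assumes a fresh Ubuntu host and a dump at /tmp/mydb.dump (hypothetical).
set -e

# Install Dokku, pinned to a version you've tested
wget -NP . https://dokku.com/install/v0.30.0/bootstrap.sh
sudo DOKKU_TAG=v0.30.0 bash bootstrap.sh

# Recreate the app and its database
dokku apps:create myapp
sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres
dokku postgres:create mydb
dokku postgres:import mydb < /tmp/mydb.dump
dokku postgres:link mydb myapp

# Then deploy from your machine or CI:
#   git remote add dokku dokku@new-box:myapp && git push dokku main
```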
So the only reason I might want something like Flynn is an unexpected uptick in load. And unfortunately, in my first experience with Flynn, things went bad and I wasn't able to restore from backups, which scared me off more than I was scared of load balancing (read: I ran out of time for the experiments and vowed "one day I'll try again. Maybe."). That said, overall Flynn did seem reasonably polished, so I'm sad about the announcement.
Maybe, some day, someone will take on this middle ground between Dokku and Kubernetes. But until then, Dokku has definitely proved itself.
I recently transitioned from Capistrano deployments to CapRover and it's mostly working fine. The variety of ways to deploy apps was especially relevant to my situation.
My biggest complaint would be the non-zero-downtime deployments.
I moved us from Heroku to GKE a few years ago when we had ~4 engineers - it took a bit to get things figured out and get CI deploys etc. working well, but it really wasn't all that complicated.
Honestly, I'm baffled at how many people are convinced you need hundreds of engineers before k8s can work well - they must be doing something very different from what we are. We did keep databases out of k8s (until recently, when we added CockroachDB running inside the cluster) and only had ~6 services to move, which may have kept things simpler. We're now scaling to run thousands of vCPUs at peak times and a dozen different services, and it's still not all that hard to manage.
Can somebody who has had the opposite experience comment on what actually made it so difficult to implement? I imagine that if you are managing the cluster control plane yourself that will make things much more difficult - but unless you have some very specific requirement you can use a hosted k8s to reduce that.
If you use GKE, you have been spoiled. Other Kubernetes platforms - in particular EKS, which is what most people are going to use since most people use AWS - are way more work to set up and maintain. (Emphasis on the "maintain".)
Also, I think a lot of companies have really terrible devops practices that don't work in Kubernetes. So their move to Kubernetes includes a lot of extra work that isn't really caused by the move to Kubernetes.
Yeah, that's probably a major factor here - we moved to Google Cloud specifically for GKE and it's been pretty great. We've had a couple of issues (mostly with kube-dns autoscaling not keeping up; node-local DNS helped a lot), but ultimately it's saved far more work than it's created.
Not everyone can run in the cloud, and of the ones who can, there are a number of organizations that can't put their data in clouds that have to comply with US court orders.
Now if you are in Europe, Scaleway launched a K8s offering not that long ago, but the rest of the world has to run that stuff locally, and running K8s securely on a locked-down corporate network can be very complicated.
Sure, but the advice people generally give is that small teams and startups shouldn't use k8s - and those are the organizations that can most likely use one of the hosted k8s implementations (which are available anywhere AWS/GCP/Azure have datacenters). Larger corporates will have more complexity to work with, but also larger teams which can handle it.
Roughly how many person hours were needed to implement review apps on GKE? Do your GKE review apps auto-hibernate like Heroku review apps do when using AutoIdle (https://autoidle.com)?
We don't have review apps yet; we just have a few testing environments that devs can push changes to when they want them available. It is a really nice feature of Heroku though, so something it'd be nice to replicate. I think it'd probably take a week or so of dev time, the hardest part being automating spinning up copies of all the dependencies we need.
I do this for a living (help companies migrate to k8s).
The advice I give everyone is: Stay off k8s until you care about binpacking. That is, making sure you're fully utilizing the instances you pay for. When the cost of your architecture is taking up some brain cycles, start digging in.
If that's low on your priority list, it's not worth the investment. If you're reasonably considered "a startup", invest your time/money elsewhere. PMF and getting to default-alive are far more important.
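(To make "binpacking" concrete: in k8s it comes down to setting resource requests so the scheduler can pack many pods per node instead of running one service per instance. A minimal sketch - the names and numbers are illustrative, not a recommendation:)

```yaml
# With requests set, the scheduler can fit several of these pods onto one
# node (e.g. ~7 on a 4-vCPU/8GiB node) instead of one service per VM.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 7
  selector:
    matchLabels: {app: api}
  template:
    metadata:
      labels: {app: api}
    spec:
      containers:
      - name: api
        image: example/api:latest   # hypothetical image
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            memory: 1Gi
```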
How do you suggest companies which don't need binpacking run their workloads, and is it really that much simpler than using k8s?
If you need automated deployments, centralized logging, autoscaling, etc which many teams do, then you're going to be dealing with a bunch of complexity anyway.
Honestly, package a container and run it Serverless. AWS Fargate, GCP Cloud Run, and similar are better fits.
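(The serverless-container contract is tiny: serve HTTP on whatever port the platform injects via $PORT. A minimal Python sketch of a Cloud Run-style service - in a real container the server would just run in the foreground; the in-process check at the end is only there so the sketch is self-contained:)

```python
import os
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Tiny handler: any GET returns 200 with "ok"."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")

    def log_message(self, *args):
        pass  # keep request logging quiet

# Cloud Run injects PORT; fall back to 8080 (its documented default) locally.
port = int(os.environ.get("PORT", "8080"))
server = HTTPServer(("127.0.0.1", port), Handler)

# In a container entrypoint you'd call server.serve_forever() and block;
# here it runs in a daemon thread so the sketch can check itself and exit.
threading.Thread(target=server.serve_forever, daemon=True).start()
print(urllib.request.urlopen(f"http://127.0.0.1:{port}/").read().decode().strip())
server.shutdown()
```

From there, deploying is a `gcloud run deploy` (or a Fargate task definition) - either way there's no cluster for you to operate.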
There will come a time when the overhead cost of a devops person (and eventually a team) is worth it. At that point, k8s can be a great fit.
In my experience, that tends to be when you're at enough scale to care about costs a lot. Total spend and/or reducing COGS makes it worthwhile. It's easier to see when you look at it in terms of what an engineer's time costs.
Are you gonna save 200k/year (minimum) in costs moving to k8s? Then do it. If you don't have line of sight to that, pay AWS/GCP to manage that for you, and focus on your business.
Also note, there are stages even with running k8s. Don't go all-in running it all yourself.
Start with a container, run it serverless.
When k8s becomes a better fit (to reduce costs, or with other small exceptions), use EKS or GKE. Don't run your own control plane.
If you really need a lot of custom stuff, then start to run your own control plane. But by this time, you probably have a team managing all of this. If that cost (remembering how expensive engineers are) is shocking, you should be running a different solution.
It’s not dependent on size but on what you do. If you run a mono stack, you initially spend less time on devops than when you run microservices - anyone who tries to tell you otherwise hasn’t done a Heroku deploy lately. When your engineers spend enough time on devops to impact productivity is when you reach for a different solution. I can only see one other reason to start with microservices: you are developing something with much higher than usual security requirements and need an “air gap” between pieces of your system. Like if you store raw credit card numbers and want to separate their storage mechanism from the rest of the system with a really tightly controlled API.
Even so, K8s is a solution for when running containers on a container-as-a-service system is prohibitively expensive. The newfangled systems aren’t free in terms of operating costs, and would you rather pay for extra hardware or for engineering time, knowing that hardware doesn’t take vacations or leave your company for a better job?
It is dependent on size. You could, for example, start with an engineering team working on a mono stack and outsource data and analytics operations. But at some point, you will decide that data and analytics have to be in-house. You start with a mono stack for each of them. But then you start splitting each of these three into sub-teams, and suddenly a mono stack doesn't make as much sense.
K8s solves the problem of horizontal scaling in both teams and infrastructure. It's inevitable when (and only if) you plan on scaling up. Not that all teams/companies will/want to follow that path.
Is K8s the only way to deploy two different services that talk to each other? Why not something like AWS’s or Google’s or Microsoft’s container service? Why not a second app on Heroku or Google App Engine, etc?
If you have a team of 50 engineers but 40 of them work on the front end of your SPA, and the backend is simple CRUD, why do you need your own container infrastructure?
I stand by that the decision should be primarily based on how much of your team’s effort is diverted to devops. K8s is a devops solution, not a way to organize code or a development framework.
I'd take the alternate view. K8s is a huge relief for both development and operations, but it absolutely requires an entire team of dedicated folks, along with re-tooling everything to fit its paradigm. This is true of all orchestration systems, but especially true for k8s. If you have a dozen eng teams and 50 microservices, and 2 or 3 devops/SRE people because it's all in AWS/GCP/Azure, k8s is going to crash and burn. If you have a dozen eng teams with 2+ devops/SRE folks in each of them, and a handful of extra folks to form a whole new team for k8s, you're in great shape.
We have multiple data centers and a similarly sized devops team (separate from SRE). They're exceptionally skilled and lord I pray they'll make it, because I know it won't be an easy journey.
None of our teams have dedicated devops engineers. Teams just have engineers that do everything: back end, front end, deployments, monitoring dashboards. Teams write runbooks for SRE. DevOps supplies eng teams with tooling for deployments.
I've argued many, many times we spread our engineers too thin and we need greater specialization for better outcomes. We have enough engineers, but the problem is everyone owns their own thin but very tall vertical slice of the total system.
If everyone truly understands their thin and tall slice, please keep those people - they are worth their weight in gold. If anything, it sounds like you have an SRE organization right now, and could use some extra developers and operators. :)
I've seen k8s managed by a couple of DevOps engineers (among the rest of the infrastructure) and used in a company with 3-5 teams (and 200 microservices, but not that much code; they just drank the microservices Kool-Aid).
That was a migration from running containers on EC2 (semi automated orchestration) to running and maintaining k8s on EC2.
Best infra experience I've seen in any company I worked at (and the company wasn't that great overall).
I also ran some smaller-scale k8s by myself while doing eng management work, so I disagree that k8s is as hard as people make it out to be.
We have 3 infra eng + manager for over a dozen devs and a bunch of services deployed on k8s (including all DBs). K8s itself is self managed (nothing is hosted) and has been the least of our concerns operations-wise.
We’ll probably grow that ratio quite a bit in the coming year or two, but ownership is set up such that it shouldn’t be a proportional increase in our workload. K8s and some other tooling we use make that considerably easier if you thread that needle right.
> For a company that is a single eng team, it's obvious.
I'm not sure I get this. Are you saying that it obviously does or doesn't make sense?
My team is three people and k8s makes our lives easier than any alternative I'm aware of. We used to be on Heroku, which is cool, until you need to run anything other than a monolith, or anything more secure than all-publicly-accessible services.
Maybe it's less about Kubernetes than about the market itself? If I don't want to do any ops, I also don't want to maintain a server and keep it safe, so I would go for Heroku instead of running that myself. And if you need more, there are also managed k8s offerings, which seem to be working fine for many small teams as well. For us it's basically a cheaper version of Heroku with a little more effort. We've actually been using small managed k8s clusters from DigitalOcean for nearly a year now and have had zero issues so far. I really like not having to take care of the servers.
Dokku, Deis, Rancher, and finally Flynn. Another one falls. I've been around all these projects and small scale docker PaaS for almost a decade, and k8s has just killed them off, which is so sad. As you say, you don't always want to use k8s for a small three machine setup. I guess it was always going to happen, after containers stopped feeling cool, people stopped working on projects for them.
Re: Dokku, reports of our demise are greatly exaggerated.
We're still chugging along, and have recently added Cloud Native Buildpacks support[1], as well as integrations with Kubernetes[2] and Nomad[3]. Happy to hear where you found out that development on the project was halted, given that I made a release yesterday[4]...
> Dokku, Deis, Rancher, and finally Flynn. Another one falls.
Isn't Dokku alive and well? I don't use it but I read about it in some related research recently, and some people on this thread report to be happy users.
[1] https://dokku.com/
[2] https://deislabs.io/
[3] https://rancher.com/
[4] https://www.reddit.com/r/devops/comments/bgpw5w/flynn_vs_dok...