
What's the best way to handle updates?

I would like to switch to a dockerized setup, but running everything on Debian/stable has the advantage of unattended-upgrades (which has worked absolutely flawlessly for me for years across dozens of snowflake VMs). Not going back to manual upgrades.

I tried a Registry/Gitea/Drone.io/Watchtower pipeline (all on the same host, rebuilding themselves too) and it worked, but it felt patched together. Am I doing it wrong?



In-container upgrades seem to be an issue for many; we've had the same problem at $work.

From what I've seen, there is no consensus on the "right" way to do it.

You can either run the upgrades inside the container and lose them when it's recreated, or you have to continuously rebuild and redeploy the containers.
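
To make the first option concrete (container and image names here are made up, not from the thread):

    # Upgrading inside a running container works...
    docker exec mycontainer apt-get update
    docker exec mycontainer apt-get -y upgrade
    # ...but the changes live only in the container's writable layer,
    # so recreating the container from its image throws them away.
    docker rm -f mycontainer
    docker run -d --name mycontainer myimage:latest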

This alone is one major reason I'm in favour of statically linked binaries in from-scratch containers, where possible.
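
A rough sketch of what I mean, assuming a pure-Go service (names are hypothetical): the final image contains nothing but the binary, so there is no distro inside the container to keep patched.

    # Build a statically linked binary (pure Go, CGO disabled)...
    CGO_ENABLED=0 go build -o app .
    # ...and pack it into an empty base image.
    printf 'FROM scratch\nCOPY app /app\nENTRYPOINT ["/app"]\n' > Dockerfile
    docker build -t app:static .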


If you are building everything on the same machine, you could get rid of the registry and Watchtower: mount the host machine's Docker socket into your pipeline and have the pipeline build and start the image directly on the host.
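
Roughly like this, as an untested sketch (image and container names are placeholders): the build step gets the host's /var/run/docker.sock, so docker build and docker run talk to the host daemon rather than a nested one.

    docker run --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v "$PWD":/src -w /src \
      docker:cli sh -ec '
        docker build -t myapp:latest .
        docker rm -f myapp 2>/dev/null || true
        docker run -d --name myapp --restart unless-stopped myapp:latest'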

I use dockerhub/github/drone/watchtower to automatically publish images and update services. I use dockerhub and github to avoid self-hosting the registry and version control. I also run about a dozen servers (as opposed to the single server in your example). This works really well for me and does not feel patchy. Yes, there are moving parts, but fewer than in a full-blown orchestration system.
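
For reference, the Watchtower end of a setup like that is just one extra container per host, something along these lines (the poll interval and names are my own choices, not from the parent):

    # Watchtower watches the local Docker daemon, polls the registry for
    # newer versions of running images, and recreates stale containers.
    docker run -d --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower --interval 300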



