Docker uses iptables for port forwarding, and those rules typically end up ahead of the rules inserted by your firewall (firewalld/ufw/manual scripts/whatever).
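Concretely, `docker run -p 8080:80 ...` adds a DNAT rule to the nat table's DOCKER chain, and the translated traffic is then accepted in FORWARD rather than INPUT. A rough sketch of what you'd see (exact output varies by Docker version; the container IP is made up):

```
$ sudo iptables -t nat -S DOCKER
-N DOCKER
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
```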
It's not so much that Docker explicitly punches a hole in your firewall as that published ports take a networking path that traditional host firewalls don't really cover: the traffic is forwarded, not delivered locally, so the INPUT rules your firewall frontend manages never see it.
Another way of viewing it is that Docker "is" the firewall for your container workloads, and adding a port-forward is equivalent to adding a firewall rule. Of course, that doesn't change the fact that public-by-default is a bad default.
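If you want Docker to keep managing forwarding but still enforce your own policy, it consults the DOCKER-USER chain before any of its own rules. A minimal sketch, assuming eth0 is your public interface and 203.0.113.0/24 is the only range you want to let in (both are placeholders):

```
# Drop forwarded traffic arriving on eth0 unless it comes from the
# allowed range; DOCKER-USER is evaluated before Docker's own chains.
sudo iptables -I DOCKER-USER -i eth0 ! -s 203.0.113.0/24 -j DROP
```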
This is (imho) a huge flaw in the concept of a "container". I don't think most people comprehend how much crap is going on in the background.
For most container purposes, host networking and the default process namespace are absolutely fine, and they remove a lot of the friction of interacting with containerized apps. 95% of container use cases are effectively just a chroot wrapper; the extra isolation should be opt-in for workloads that need it. That would also make rootless, federated containerized apps just work. But nobody wants to go back to enabling features incrementally when Docker gives them everything at once.
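To be fair, you can opt out per container today. A sketch, using nginx as a stand-in workload:

```
# Share the host's network stack and PID namespace: the app binds
# its ports directly on the host (no NAT, no DNAT rules), so your
# normal firewall rules apply to it like to any other process.
docker run --rm --network host --pid host nginx
```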
Kubernetes has the same problem. We ran into instances where iptables contention during outage recovery was so bad that things just stalled: every rule update grabs the global xtables lock, and kube-proxy in iptables mode rewrites enormous chains. The iptables-save output looked like a bomb had gone off.
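A quick way to gauge the bloat on an affected node, given that rule count grows roughly with services times endpoints under kube-proxy's iptables mode:

```
# Count the rules kube-proxy and friends have accumulated.
iptables-save | wc -l
```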
> My current policy is to set `"iptables": false` in Docker's `daemon.json` on any public machine. I don't understand why this isn't the default.
If Docker doesn't manage iptables, then published ports have to go through the (slow) userspace docker-proxy to reach your containers. That also means losing the source IP address of incoming connections, because the container only ever sees the proxy as the peer.
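For reference, the setting from the parent comment and what it leaves you with (the `ss` output is illustrative, not from a real host):

```
$ cat /etc/docker/daemon.json
{
  "iptables": false
}

$ ss -tlnp | grep docker-proxy
LISTEN 0  4096  0.0.0.0:8080  0.0.0.0:*  users:(("docker-proxy",pid=1234,fd=4))
```

Inside the container, connections on that port then appear to come from the bridge gateway (e.g. 172.17.0.1) rather than the real client.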