
Docker uses iptables for port forwarding, and those rules typically end up ahead of the rules inserted by your firewall (firewalld/ufw/manual scripts/whatever).

It's not so much that they explicitly open a hole in your firewall as that forwarded traffic takes a path (the FORWARD chain, not INPUT) that traditional host firewalls don't really cover.

Another way of viewing it is that Docker "is" your firewall for your container workloads, and that adding a port-forward is equivalent to adding a firewall rule. Of course, that doesn't change that public-by-default is a bad default.
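To make that concrete, here's a rough sketch (image name and port are just examples; chain names and exact output vary by Docker version):

    # Publish a port (nginx and 8080 are just examples)
    docker run -d -p 8080:80 nginx

    # Docker's own chain in the nat table now holds a DNAT rule for
    # tcp dpt:8080 -> the container's IP, evaluated from PREROUTING
    # before the host firewall's INPUT rules ever apply.
    sudo iptables -t nat -L DOCKER -n -v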



This is right, I remember now - Docker does mangle your iptables chains. I fought with this a while back.

Terrible practice, in my opinion. Docker shouldn't be touching firewall stuff.


I've resorted to adding my own firewall rules to the 'raw' table, which pretty much preempts all the rules Docker or the distribution inserts.

It's not as powerful as the later tables in the chain (see https://upload.wikimedia.org/wikipedia/commons/3/37/Netfilte... ) but a lot more robust.
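For reference, a minimal sketch of that approach (the subnet and port are placeholders for your own):

    # Allow a trusted subnet, drop everything else to the published port,
    # all in raw/PREROUTING - this runs before Docker's nat DNAT rules,
    # so the dport here is still the host-side port.
    # ACCEPT in the raw table only ends raw traversal; processing continues.
    sudo iptables -t raw -A PREROUTING -s 203.0.113.0/24 -p tcp --dport 8080 -j ACCEPT
    sudo iptables -t raw -A PREROUTING ! -i lo -p tcp --dport 8080 -j DROP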


Iptables magic is essential to how a lot of container networking stuff is implemented, though.


This is (imho) a huge flaw in the concept of a "container". I don't think most people comprehend how much crap is going on in the background.

For most container purposes, host networking and the default process namespace are absolutely fine, and they avoid a lot of friction when interacting with containerized apps. 95% of the use of containers is effectively just a chroot wrapper. If you need more features, they should be opt-in. That would also make rootless federated containerized apps just work. But nobody wants to go back to incremental features when Docker gives them everything at once.
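In practice that's a one-flag change (nginx here is just an example):

    # Host networking: the container shares the host's network namespace,
    # so the app binds host ports directly - no veth, no NAT, no iptables
    # rules added for port publishing (Linux only).
    docker run --rm --network host nginx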


If you think that’s bad, wait til you see what the iptables-save output is like on an istio-proxy sidecar ;)


Kubernetes as well. We ran into instances where iptables contention was so bad during outage recovery that things just stalled. iptables-save looked like a bomb went off.


This has been a major pain point for me. Despite my `firewalld` configuration only allowing specific traffic, all my containers were exposed.

My current policy is to set `"iptables": false` in Docker's `daemon.json` on any public machine. I don't understand why this isn't the default.
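For anyone wanting the same setup, it's one key in /etc/docker/daemon.json plus a daemon restart (note the tradeoff the reply below points out):

    {
      "iptables": false
    }

    # then restart the daemon for it to take effect
    sudo systemctl restart docker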


> My current policy is to set `"iptables": false` in Docker's `daemon.json` on any public machine. I don't understand why this isn't the default.

If you don't muck with iptables, then Docker needs a (slow) userspace proxy to expose your containers. That also means losing the source IP address on incoming connections.
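A quick way to see this (assuming the default userland-proxy setting): with iptables disabled, each published port gets its own docker-proxy process, and connections arrive from the proxy rather than the real client:

    # one docker-proxy process per published port; the [d] keeps grep
    # from matching its own command line
    ps aux | grep [d]ocker-proxy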


Interesting, I haven't noticed any slowdown, but I am running fairly low-traffic services.

I do see that RemoteAddr is from a private IP range. Luckily I'm not using this information anywhere, but good to know.



