Hey everyone! I recently deployed a DaemonSet related to the nginx CVE-2025-1974, which you can check out [here](https://blog.abhimanyu-saharan.com/posts/ingress-nginx-cve-2025-1974-what-it-is-and-how-to-fix-it). The strange part is that while this DaemonSet modifies iptables rules from within its own containers, the changes seem to impact the entire Kubernetes cluster. Can someone explain how this works? I SSH'd into the nodes expecting to see the changes in their iptables, but I don't see the deny rule. Also, what would happen if I remove the DaemonSet? Thanks a lot!
3 Answers
Kubernetes uses Linux Namespaces for container isolation. By default, every Pod has its own Network Namespace. If a Pod modifies iptables rules, it only affects its own Namespace. However, if you use `spec.hostNetwork: true`, the Pod shares the host's Network Namespace, and any iptables changes would affect the entire worker node. That's why your Pod's IP matches the worker node's IP in this case, similar to how kube-proxy operates with iptables.
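To make that concrete, here's a minimal, hypothetical Pod spec with host networking enabled (the name and image are placeholders). Any iptables rules a Pod like this writes apply to the node it runs on, not to an isolated Pod namespace:

```yaml
# Hypothetical example: a Pod that shares the node's network namespace.
# With hostNetwork: true, the Pod sees the node's interfaces, its Pod IP
# equals the node's IP, and iptables changes it makes affect the whole node.
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-demo
spec:
  hostNetwork: true            # share the node's network namespace
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "3600"]
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]   # needed to modify iptables rules
```

Without both `hostNetwork: true` and the `NET_ADMIN` capability, iptables changes made by the container only ever touch its own namespace.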
Just to add: if the DaemonSet's Pods do *not* use `hostNetwork`, they're modifying iptables rules inside each Pod's own network namespace, not the host's — which is exactly why SSH'ing into the node doesn't show those rules. If you delete the DaemonSet, those Pods and their network namespaces are torn down, and any rules that lived only inside them disappear with them. Rules written into the host's network namespace, on the other hand, persist until they're flushed or the node reboots. Be cautious either way; your services may depend on those rules!
Got it! That makes sense now, thanks!
If the DaemonSet runs with `hostNetwork: true` (plus the `NET_ADMIN` capability), the iptables rules it sets land on the host itself. And since a DaemonSet schedules one Pod on every node, the same rules get applied on every node — that's why the effect shows up across the whole cluster!
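For context, a mitigation DaemonSet of this kind typically needs both of those settings to touch the node's rules. Here's a hedged skeleton — names, image, and the exact rule are placeholders, not the manifest from the linked post (the ingress-nginx admission webhook commonly listens on 8443, but verify the port for your install):

```yaml
# Hypothetical skeleton only -- not the actual manifest from the blog post.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: iptables-mitigation
spec:
  selector:
    matchLabels:
      app: iptables-mitigation
  template:
    metadata:
      labels:
        app: iptables-mitigation
    spec:
      hostNetwork: true          # write rules into the host's namespace
      containers:
        - name: apply-rules
          image: alpine          # assumes an image with iptables available
          command:
            - sh
            - -c
            - "iptables -A INPUT -p tcp --dport 8443 -j DROP && sleep infinity"
          securityContext:
            capabilities:
              add: ["NET_ADMIN"] # required to modify host iptables
```

Note that with a manifest like this, deleting the DaemonSet kills the Pods but does not remove the host rule — it stays until flushed or the node reboots.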
Thanks! I thought it was strange that the iptables on the nodes didn't appear changed.
Thanks, that's a super clear explanation!