Hey everyone! I've recently integrated Karpenter into my EKS cluster, and I've noticed that Karpenter keeps the nodes it creates alive even when the only pods left on them come from DaemonSets like fluent-bit and the CloudWatch agent. My question is: how can I configure Karpenter to allow the deletion of nodes that are only running these DaemonSet pods? I understand that DaemonSets will always place a pod on every node, but when I look through the documentation, I can't find any guidance on this. I'd really appreciate any tips or insights you could share! Here's a quick overview of my node pool setup:
```
resource "kubectl_manifest" "karpenter_node_pool" {
...
}
```
Thanks a lot!
1 Answer
It looks like your node pool configuration could be influencing Karpenter's behavior. Karpenter normally treats a node that is only running DaemonSet pods (fluent-bit, the CloudWatch agent, etc.) as empty, so those pods by themselves shouldn't keep the node alive. Check your disruption settings for empty nodes: if consolidation of empty nodes isn't enabled, or the wait time before a node counts as 'empty' is too long, that could prevent it from being deleted. Sharing your node pool YAML could help identify any issues!
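If it helps, here's a minimal sketch of the disruption settings I mean, assuming the `kubectl_manifest` resource from the kubectl provider (as in your snippet) and the `karpenter.sh/v1beta1` NodePool API; the `default` NodePool/NodeClass names, the requirements, and the `30s` window are placeholders, and newer Karpenter releases rename some of these fields, so double-check the docs for your version:
```
resource "kubectl_manifest" "karpenter_node_pool" {
  yaml_body = <<-YAML
    apiVersion: karpenter.sh/v1beta1
    kind: NodePool
    metadata:
      name: default
    spec:
      disruption:
        # Only reclaim nodes once they hold nothing but DaemonSet pods.
        consolidationPolicy: WhenEmpty
        # How long a node must stay empty before Karpenter removes it.
        consolidateAfter: 30s
      template:
        spec:
          nodeClassRef:
            name: default
          requirements:
            - key: kubernetes.io/os
              operator: In
              values: ["linux"]
  YAML
}
```
With `WhenEmpty` and a short `consolidateAfter`, nodes whose only remaining pods come from DaemonSets should get cleaned up shortly after the last regular pod leaves.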
I just added my node pool configuration to the original post. I believe I found a potential snag: I had previously set some startupTaints, and they were never removed from the nodes. Now I realize that this is likely why my nodes aren't being deleted! Would love to hear your thoughts on that.
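For anyone else who hits this: Karpenter applies startupTaints at provisioning time but never removes them itself; whatever component the taint gates (a CNI agent, for example) is expected to lift it once it's ready, otherwise regular pods can never schedule onto the node. A rough sketch of where they sit, in the same style as my setup above (the taint key is purely illustrative, and the rest of the spec is trimmed down):
```
resource "kubectl_manifest" "karpenter_node_pool" {
  yaml_body = <<-YAML
    apiVersion: karpenter.sh/v1beta1
    kind: NodePool
    metadata:
      name: default
    spec:
      template:
        spec:
          nodeClassRef:
            name: default
          # Karpenter adds these taints when it provisions the node but does
          # not remove them; the component they protect is expected to do so.
          startupTaints:
            - key: example.com/agent-not-ready  # hypothetical key, use your own
              effect: NoSchedule
  YAML
}
```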