I'm running a significant number of deployments (350) on an AWS EKS cluster, using the S3 CSI driver to mount S3 directories into each pod, mainly so the JVM can write heap dumps during `OutOfMemoryError`. This setup has been economically favorable because S3 storage is cheap.

However, since updating to the v2 S3 CSI driver, I've observed that it creates intermediate Mountpoint pods in the `mount-s3` namespace (one for each mount), which results in approximately 500 additional pods in our cluster, each consuming a VPC IP address. This IP usage is becoming a concern as we continue to scale.

I'm looking for strategies to minimize the pod/IP footprint of the S3 CSI driver, or for alternative ways to get heap dumps into S3 without excessive IP overhead.
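For reference, each deployment's pod spec looks roughly like this today (image, bucket, and claim names are illustrative):

```yaml
# Sketch of the current per-deployment setup: the JVM drops a
# heap dump into the S3-backed mount when it hits an OOM.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:latest            # hypothetical app image
      env:
        - name: JAVA_TOOL_OPTIONS     # picked up by the JVM at startup
          value: >-
            -XX:+HeapDumpOnOutOfMemoryError
            -XX:HeapDumpPath=/mnt/heap-dumps
      volumeMounts:
        - name: heap-dumps
          mountPath: /mnt/heap-dumps
  volumes:
    - name: heap-dumps
      persistentVolumeClaim:
        claimName: s3-heap-dumps-pvc  # backed by the S3 CSI driver
```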
3 Answers
Using S3 CSI for heap dumps feels a bit heavy-handed. Have you thought about writing the dumps to a local volume and running a node-level DaemonSet that ships them to S3? One caveat: an emptyDir is only visible inside its own pod, so for a node-level shipper you'd point both the apps and the DaemonSet at a shared hostPath directory, roughly like the sketch below. You could even push straight to S3 from the app using the SDK and skip all the extra pods. It could simplify your architecture a lot!
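Something like this, assuming IRSA or a node role with `s3:PutObject`, and with the bucket and paths made up:

```yaml
# One shipper pod per node: app pods write dumps into the shared
# hostPath dir (/var/heap-dumps); this loop syncs it to S3.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: heap-dump-shipper
spec:
  selector:
    matchLabels:
      app: heap-dump-shipper
  template:
    metadata:
      labels:
        app: heap-dump-shipper
    spec:
      containers:
        - name: shipper
          image: amazon/aws-cli        # has /bin/sh; entrypoint overridden below
          command: ["/bin/sh", "-c"]
          args:
            - |
              # Namespace uploads by node so dumps don't collide.
              while true; do
                aws s3 sync /dumps "s3://my-heap-dump-bucket/$NODE_NAME/"
                sleep 60
              done
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: dumps
              mountPath: /dumps
      volumes:
        - name: dumps
          hostPath:
            path: /var/heap-dumps
            type: DirectoryOrCreate
```

Each app deployment would mount the same `/var/heap-dumps` hostPath and point `-XX:HeapDumpPath` at it; no extra per-pod containers or IPs at all.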
Honestly, if you're worried about IPs, consider just giving the cluster more of them. You can't resize an existing subnet, but you can add a secondary CIDR block to the VPC and put pods on it via VPC CNI custom networking. It's a straightforward fix! And uploading a file to S3 is just a single HTTP request away: a simple curl or AWS CLI call in a sidecar can handle it without all the extra complexity from the CSI driver (sketch below).
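A rough sketch of that sidecar (bucket name and paths are made up; assumes the pod's service account can write to the bucket via IRSA):

```yaml
# JVM writes dumps into an emptyDir; a tiny sidecar uploads
# whatever appears there. One S3 PUT per dump file; curl's
# --aws-sigv4 would work too if you'd rather not ship the CLI.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-dump-uploader
spec:
  containers:
    - name: app
      image: my-app:latest      # hypothetical app image
      volumeMounts:
        - name: dumps
          mountPath: /dumps
    - name: dump-uploader
      image: amazon/aws-cli
      command: ["/bin/sh", "-c"]
      args:
        - |
          while true; do
            for f in /dumps/*.hprof; do
              [ -e "$f" ] || continue
              aws s3 cp "$f" "s3://my-heap-dump-bucket/$POD_NAME/" && rm "$f"
            done
            sleep 30
          done
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumeMounts:
        - name: dumps
          mountPath: /dumps
  volumes:
    - name: dumps
      emptyDir: {}
```

Note this trades the Mountpoint pods for one extra container per pod, which costs no additional IPs (sidecars share the pod's IP) but does add a little memory overhead per deployment.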
Love that point! Not everything has to be over-engineered—just keep it simple!
It sounds like you've hit the v2 driver's per-mount Mountpoint pods! Whether that's a real problem depends on how small your subnets are, but a few hundred extra IPs can definitely be a concern. One suggestion is to look into Mountpoint pod sharing: as I read the v2 docs, workloads that mount the same PersistentVolume can share a single Mountpoint pod per node instead of each getting their own. The documentation covers it; it would mean restructuring your volumes a bit, roughly like the sketch below.
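Concretely (following the driver's static-provisioning docs; all names are illustrative), you'd have every deployment reference one shared PV/PVC instead of one per deployment:

```yaml
# A single S3-backed PV that all 350 deployments can claim.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-heap-dumps-pv
spec:
  capacity:
    storage: 1Gi               # required by k8s, ignored by the driver
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # static provisioning
  claimRef:
    namespace: default
    name: s3-heap-dumps-pvc
  mountOptions:
    - allow-delete
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-heap-dumps-handle   # must be unique in the cluster
    volumeAttributes:
      bucketName: my-heap-dump-bucket
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-heap-dumps-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  volumeName: s3-heap-dumps-pv
```

With pods writing dump filenames that include the pod name (e.g. via `-XX:HeapDumpPath`), a shared bucket shouldn't cause collisions, and the Mountpoint pod count drops from one per mount to at most one per node.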
Sounds like a lot of pods for this setup. I'm not sure if that's a sustainable approach long-term.

I thought S3 CSI was the easiest option, but managing custom sidecars for many deployments can get really cumbersome.