I'm currently running a small Redis cache on ECS Fargate as an experiment, but I've hit a snag with how CPU and memory are sized at the task level. Fargate only offers specific CPU/memory combinations, which makes right-sizing tricky. My Redis workload is predominantly read-heavy, with peak CPU needs of only about 0.5 vCPU, but I need around 5 GB of RAM for headroom, since Redis often doesn't release memory back to the OS. Unfortunately, Fargate caps memory at 4 GB on the 0.5 vCPU tier, so I'd have to allocate a full 1 vCPU just to get the extra RAM, paying for twice the CPU I actually need. Reducing RAM isn't an option, as it risks out-of-memory errors given how Redis manages memory. Has anyone else run into this? Do you just accept the extra cost, switch to ECS on EC2 for more flexible resource settings, or have you found another way to make Fargate work?
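For anyone hitting the same wall, here's a rough sketch in Python of the Fargate size tiers relevant to this question (not the full table; the exact ranges should be verified against the current AWS docs):

```python
# Approximate Fargate task-size tiers relevant here
# (key: vCPU, value: supported memory range in GB).
FARGATE_TIERS = {
    0.25: (0.5, 2),   # 0.25 vCPU: 0.5-2 GB
    0.5:  (1, 4),     # 0.5 vCPU: 1-4 GB
    1:    (2, 8),     # 1 vCPU: 2-8 GB
}

def min_vcpu_for_memory(mem_gb):
    """Smallest vCPU tier whose memory range covers mem_gb."""
    for vcpu, (lo, hi) in sorted(FARGATE_TIERS.items()):
        if lo <= mem_gb <= hi:
            return vcpu
    return None

# 4 GB still fits the 0.5 vCPU tier, but 5 GB forces the jump to 1 vCPU:
print(min_vcpu_for_memory(4))  # → 0.5
print(min_vcpu_for_memory(5))  # → 1
```

That jump from 0.5 to 1 vCPU for one extra gigabyte is exactly the over-provisioning described above.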
2 Answers
It sounds like switching to ECS on EC2 might be your best option if you want more control over the memory-to-vCPU ratio that fits your workload. Fargate has its pros, but the rigid sizing limits can be really frustrating for use cases like yours. Flexible sizing on EC2 could save you from over-provisioning and unnecessary costs.
I don't think the constraints are intentional malice—it’s likely just a more efficient model for AWS. Sure, you might be over-provisioning, but when you break it down, isn't it really just about an extra 12 bucks a month? If you're running a bunch of containers, that can definitely add up, but it could also be less hassle than constantly tweaking resource allocation. Maybe it's worth just absorbing the costs for simplicity.
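The back-of-the-envelope arithmetic behind that "extra 12 bucks" figure looks roughly like this (the rate below assumes us-east-1 Linux/x86 on-demand Fargate pricing at around $0.04048 per vCPU-hour; check the current AWS pricing page for your region before relying on it):

```python
# Cost of the extra 0.5 vCPU you're forced to provision,
# assuming ~$0.04048 per vCPU-hour (us-east-1 Linux/x86;
# verify against current AWS Fargate pricing).
VCPU_HOUR_USD = 0.04048
HOURS_PER_MONTH = 730

extra_vcpu = 1.0 - 0.5  # provisioned minus actually needed
extra_monthly = extra_vcpu * VCPU_HOUR_USD * HOURS_PER_MONTH
print(f"${extra_monthly:.2f}/month")
```

At that rate it works out to roughly $15/month per task, in the same ballpark as the figure above; the exact number depends on region and any Savings Plans discounts.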
Totally! For a few containers, those costs aren't huge, but managing large workloads on EC2 brings its own headaches. It just takes weighing the pros and cons.
That makes sense! I guess it's about finding the right balance for your projects. I’m leaning toward EC2 as well for similar reasons.