Hey everyone! I've been hitting frustrating CI failures since Docker Hub introduced pull rate limits; our runners share an IP, so they burn through the anonymous pull allowance quickly. We considered paying for Docker Pro, but that felt wrong for what is really an infrastructure problem, and self-hosted options like Harbor or Nexus seemed too heavyweight for our small team. So I built a free caching mirror called RateLimitShield: no sign-up required, it handles authentication for you, and our runners have stopped hitting the limit.
To use RateLimitShield, you just point your Docker daemon at it: add the mirror URL to `registry-mirrors` in `daemon.json` and restart the daemon. Since switching, our builds have been noticeably more stable. Curious whether anyone else has tackled this, and I'd love feedback on the approach. Thanks!
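For reference, the whole change is one entry in `/etc/docker/daemon.json` (the URL below is a placeholder, not the real mirror address; substitute whatever RateLimitShield hands out):

```json
{
  "registry-mirrors": ["https://mirror.ratelimitshield.example"]
}
```

After editing, restart the daemon (`sudo systemctl restart docker`) and check that the mirror shows up under "Registry Mirrors" in `docker info`.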
5 Answers
Yeah, I wouldn’t go for this either. Just stick with what you know. I've seen too many of these 'quick fixes' not work out well in production settings.
I appreciate the effort, but I wouldn't trust a service like this for production images without pulling the exact digest. It's great for basic stuff, but I wouldn't rely on it for critical workloads.
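To expand on the digest point, pinning looks like this (image and tag here are just an example; the digest is whatever your registry reports, not a value to copy):

```shell
# Look up the digest for the tag you currently use
docker pull nginx:1.27
docker images --digests nginx

# Then pull by digest so a mirror can't silently serve you different content
docker pull nginx@sha256:<digest-from-above>
```

A tag can be re-pointed at new content at any time; a digest is content-addressed, so a pull by digest either returns exactly those bytes or fails.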
If you're looking for a solid solution, ECR has pull-through caching capabilities that might be worth checking out. It’s pretty reliable and integrates well.
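If anyone wants to try the ECR route, the cache rule itself is a single CLI call. Sketch below with placeholder region, account ID, and secret ARN; note that Docker Hub upstreams require a Secrets Manager secret (its name must start with `ecr-pullthroughcache/`) holding your Hub credentials:

```shell
aws ecr create-pull-through-cache-rule \
  --ecr-repository-prefix docker-hub \
  --upstream-registry-url registry-1.docker.io \
  --credential-arn arn:aws:secretsmanager:us-east-1:123456789012:secret:ecr-pullthroughcache/docker-hub
```

Pulls then go through your own registry endpoint, e.g. `123456789012.dkr.ecr.us-east-1.amazonaws.com/docker-hub/library/nginx:latest`, and ECR caches the layers on first pull.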
Using something like Artifactory to proxy Docker Hub works really well. You can set up access that's more tailored to your needs without the risk of going with an unknown service.
There's no magic solution here. Proxy caches like the ones you mentioned can work, but they come with their own risks and limitations. Just be cautious.
A quick workaround is Google's registry mirror, `mirror.gcr.io`; it gives you some caching relief without having to trust a new service.
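The `daemon.json` change for this is the same one-liner as for any mirror; worth knowing that `mirror.gcr.io` only caches frequently pulled public Docker Hub images, so cache misses still fall through to Docker Hub and count against the limit:

```json
{
  "registry-mirrors": ["https://mirror.gcr.io"]
}
```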