I'm working on a feature for an open source application and have it set up with Docker Compose. The job runner container I created updates files in a shared volume that multiple containers use.
Now I'm curious if the same approach will work in a Kubernetes (k8s) environment. My idea is to have a volume mount pushed to each node during deployment, which the pods on those nodes would then utilize. When it's time to update, I'd run the job runner on each node to modify the volume mount directly.
Currently, my workflow involves updating files stored on AWS S3, with every pod running a cron job that checks for and downloads new files. However, I'm eager to eliminate the dependency on S3. Is this feasible?
1 Answer
It sounds like you're running into limitations of the Kubernetes volume access modes. If your volume only supports ReadWriteOnce, it can be mounted read-write by a single node at a time, which won't work for your use case. You'd need a volume type that supports ReadWriteMany — typically a network or distributed file system (NFS, CephFS, and similar) — or you'll have to reconsider your volume strategy.
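For illustration, a PersistentVolumeClaim requesting ReadWriteMany might look like the sketch below. The claim name and storage class are hypothetical; this only works if your cluster has a storage class whose provisioner actually supports RWX (for example, one backed by NFS or CephFS):

```yaml
# Hypothetical PVC requesting ReadWriteMany access.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data            # hypothetical name
spec:
  accessModes:
    - ReadWriteMany            # multiple nodes may mount read-write
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-client # assumption: an RWX-capable class exists
```

Pods on any node can then mount this claim, and writes from your job runner become visible cluster-wide without the S3 polling step.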

What I'm aiming for is like having multiple Docker containers on a single machine accessing the same volume. I want all containers across different nodes to mount that folder, and when I modify the original folder, the change should be visible to all containers. I want to explore solutions without relying on an external service.
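The per-node idea from the question could be sketched as a DaemonSet job runner writing to a hostPath directory (image name and paths below are hypothetical). Note the key caveat: hostPath is strictly per-node, so pods on the same node see the updates, but pods on other nodes do not — which is why a ReadWriteMany-capable volume is usually needed for true cluster-wide sharing:

```yaml
# Hypothetical DaemonSet job runner updating a per-node hostPath directory.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: job-runner
spec:
  selector:
    matchLabels:
      app: job-runner
  template:
    metadata:
      labels:
        app: job-runner
    spec:
      containers:
        - name: runner
          image: example/job-runner:latest  # hypothetical image
          volumeMounts:
            - name: shared
              mountPath: /data              # runner writes here
      volumes:
        - name: shared
          hostPath:
            path: /var/lib/shared-data      # shared only within this node
            type: DirectoryOrCreate
```

Application pods on the same node would mount the same hostPath to see the runner's changes; keeping all nodes consistent is then your responsibility, since Kubernetes does not replicate hostPath contents between nodes.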