Hey everyone! I'm relatively new to building Kubernetes clusters and I'm a bit stuck on an issue. I have a local K8s cluster with two nodes: one for the control plane and one worker. My setup includes a Python Flask backend (which I've also tested with Node.js) and a static HTML/JS frontend served through Nginx. The services are configured properly, with the backend using ClusterIP and the frontend set to NodePort.
Here's the issue: My backend pod starts off as Running, then goes to Completed, and ultimately ends up in CrashLoopBackOff status. When I check the logs for the backend, there's nothing there. Interestingly, the Flask app runs perfectly with Podman on the worker node and handles requests without issues.
However, the frontend pod does experience some restarts but eventually stabilizes. The main problem is that the frontend can't connect to the backend since it's not running. I've looked at the backend pod's description, and it shows a Last State of Completed with Exit Code 0, but there's still no crash trace in the logs; the output is completely empty. I'm pretty sure my YAML isn't overly complicated: there's only a single container with the correct ports exposed and no health checks.
I suspect the pod might be exiting cleanly after processing the POST request, and that's why Kubernetes thinks it crashed. So my questions are: why does this pod exit cleanly instead of staying alive? And why does it work fine with Podman but not in Kubernetes? Any guidance would be appreciated, and I'm happy to share any files you'd like to check.
5 Answers
Consider running your application with unbuffered output. Adding more log lines at startup and a long sleep at the end can also help you see how far the process actually gets. Once it's working, remove those adjustments one at a time to narrow down what was causing the problem.
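If the backend container is the Python/Flask one, an easy way to get unbuffered output is an environment variable in the container spec. A minimal sketch, assuming a Flask backend on port 5000; the image name is a placeholder:

```yaml
# Hypothetical backend container spec; image name and port are assumptions.
containers:
  - name: backend
    image: registry.local/flask-backend:latest  # placeholder image
    ports:
      - containerPort: 5000
    env:
      - name: PYTHONUNBUFFERED   # make Python flush stdout/stderr immediately
        value: "1"
```

With buffering off, any startup prints should show up in `kubectl logs` even if the process exits right away.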
Are you trying to run two Express servers on the same port, 5000? Just want to clarify if that's happening since you showed both the frontend and backend.
Could it be a misconfiguration of the readiness probe? That’s something to double-check.
You haven't shared your Kubernetes manifest, right? I bet your pod is being killed by a failing liveness probe. Check whether the probe is pointed at a different port than the one your app is actually listening on. You can verify this by running `kubectl get events` to view the pod events.
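For reference, a probe only passes if it targets the port the app really binds to. A minimal sketch, assuming the Flask backend listens on 5000; the health path is a placeholder:

```yaml
# Hypothetical liveness probe; the port must match what the app listens on.
livenessProbe:
  httpGet:
    path: /healthz          # placeholder health endpoint
    port: 5000              # must match the backend's listen port
  initialDelaySeconds: 5
  periodSeconds: 10
```

`kubectl describe pod <backend-pod>` will also show probe failures in the Events section.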
You might want to check the logs from the previous container instance with `kubectl logs --previous <pod-name>`. An exit code of 0 means your program terminated cleanly rather than crashing. You probably need to adjust your server logic so the main process stays in the foreground and keeps running; it shouldn't exit unless it's really supposed to.
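As a sketch of what that looks like for Flask (the filename, route, and port here are assumptions, not your actual code), the container's main process has to block on the server loop instead of returning after doing its work:

```python
# app.py - hypothetical minimal backend; the point is that app.run() blocks,
# so the container's main process stays alive instead of exiting with code 0.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/echo", methods=["POST"])
def echo():
    # Handle the POST and return a response; the server keeps listening afterwards.
    return jsonify(request.get_json(silent=True) or {})

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the pod is reachable through the Service,
    # and block here serving requests for the lifetime of the container.
    app.run(host="0.0.0.0", port=5000)
```

If the container's entrypoint is instead a script that handles one task and then returns, it exits with code 0, Kubernetes restarts it, and you land in CrashLoopBackOff even though nothing actually crashed.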