I'm working on a Bash script that creates a Docker network and a volume, builds several Docker images, and runs containers to generate and copy certificates. However, I want to ensure that if any step in this process fails, everything stops, and any changes made are rolled back. I've heard that I could put an if statement around each command to check for success, but that would make the script messy. Another option I considered is using `set -Euox pipefail`, but I've been warned that it could lead to unpredictable behavior. If someone interrupts the script with Ctrl + C, I need to know how to handle rollbacks too. What methods can I use to achieve a clean abort if a failure happens?
4 Answers
Consider adding `-e` to the shebang (`#!/bin/bash -e`) and using `set -o pipefail` to abort the script on any error. This way, if a command fails, it will stop the script right there. Just keep in mind the caveats: commands tested in `if`/`while` conditions or chained with `&&`/`||` won't trigger the abort, and a failing command substitution inside a declaration like `local x=$(cmd)` is masked by the declaration's own exit status. That said, it's almost exactly what you need!
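A minimal sketch of the errexit + pipefail behavior, run in a subshell so the abort status can be inspected afterwards:

```shell
#!/usr/bin/env bash
# Demonstrate errexit + pipefail in a subshell so we can observe
# the exit status instead of killing the current shell.
(
  set -e -o pipefail
  false | true          # pipefail marks this pipeline as failed...
  echo "unreachable"    # ...so errexit aborts before reaching here
)
echo "aborted with status $?"
```

Without `pipefail`, `false | true` would succeed (only the last stage's status counts), and the script would keep going.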
Check out Bash's `trap` builtin. It lets you register cleanup actions that run when the script exits — including exits caused by `set -e` or by Ctrl + C — which handles abrupt terminations well.
You can use a wrapper function for your Docker commands to check for failures. Define a wrapper that runs the command it's given and, if the command returns non-zero, echoes an error message and stops execution. Replace the bare Docker calls with the wrapper and your script stays clean while still failing fast.
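A sketch of such a wrapper — `try` is a hypothetical name, and the docker calls are the assumptions from the question, shown commented out. Note `"$@"` (not `"$*"`) so arguments containing spaces are passed through intact:

```shell
#!/usr/bin/env bash
# Fail-fast wrapper: run a command, abort the script if it fails.
try() {
  if ! "$@"; then
    echo "ERROR: command failed: $*" >&2
    exit 1
  fi
}

# Usage with the question's docker steps (placeholder names):
# try docker network create my-net
# try docker volume create my-vol
try echo "this step succeeded"
```

Combine this with a `trap cleanup EXIT` and the `exit 1` inside the wrapper will trigger your rollback automatically.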
But what happens if your network creation command fails? Will it still call that Docker function?
Also, make sure to use "$@" in your function instead of "$*" for better handling of command parameters.
Another method is to use `break` when you want to terminate loops early, but be aware that `break` only exits the enclosing loop (or N levels with `break N`) — it does not abort the whole script. To stop everything from inside a loop, use `exit` with a non-zero status instead.
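A quick sketch of the difference, using placeholder step names:

```shell
#!/usr/bin/env bash
# break leaves only the enclosing loop; the script keeps running.
for step in network volume image; do
  if [ "$step" = volume ]; then
    break              # stop looping, but the script carries on
  fi
  echo "created $step"
done
echo "script continues after break"
# exit 1               # <- this is what actually aborts everything
```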

It's better to just use `set -e` (or `set -o errexit`) inside the script rather than modifying the shebang — a shebang flag is silently ignored if someone runs the script with `bash script.sh`. Way cleaner!