I have a Bash script running on AWS EC2 that orchestrates a PostgreSQL dump using Docker. It works fine when everything goes smoothly, but if something fails—like if a container already exists or there's a connection issue—Docker containers get left running in the background. I need help figuring out how to send an email with the error message when something goes wrong and also shut down those dangling containers. The goal is to capture error messages from each step and provide details about what went wrong in the email notification.
3 Answers
To preserve error messages from your commands, redirect stderr to a log file or capture it in a variable. For instance, `your_command 2>&1 | tee -a error.log` merges stderr into stdout, displays everything on the terminal, and appends it to a log file. Your email function can then read that log file to build the message body.
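A minimal sketch of that pattern. Here a deliberately failing `ls` stands in for the real `pg_dump`/Docker step, and the log file path is just a temp file for the demo:

```shell
#!/usr/bin/env bash
LOGFILE=$(mktemp)

# 2>&1 merges stderr into stdout; tee -a shows the output on the terminal
# and appends it to the log. The missing path simulates a failing step.
ls /nonexistent-path-for-demo 2>&1 | tee -a "$LOGFILE"

# Later, the email step reads everything accumulated so far:
ERRORS=$(cat "$LOGFILE")
```

Because `tee -a` appends, several steps can share one log file and the final email will contain all of their errors in order.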
You can achieve this by making your cleanup function aware of the script's exit status. With `trap 'cleanup $?' EXIT`, the cleanup function runs whenever the script exits, and `$?` holds the status of the last command at that moment, so cleanup can decide whether to send an email or just tidy up quietly.
Exactly! This way, you can check the exit code and understand what went wrong, allowing you to include specific details in your email.
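A runnable sketch of that EXIT-trap pattern, wrapped in an inner `bash -s` call so the failure can be simulated with `false`. The container name and mail invocation in the comments are placeholders, not details from the original script:

```shell
#!/usr/bin/env bash
RESULT=$(bash -s <<'SCRIPT'
cleanup() {
  exit_code=$1
  if [ "$exit_code" -ne 0 ]; then
    # In the real script this is where you would stop the dangling
    # container and send the mail, e.g. (hypothetical names):
    #   docker rm -f pgdump_container 2>/dev/null
    #   mail -s "pg_dump failed (code $exit_code)" you@example.com < error.log
    echo "cleanup saw exit code $exit_code"
  fi
}
trap 'cleanup $?' EXIT

false   # simulated failing step; its status becomes the script's exit status
SCRIPT
)
echo "$RESULT"
```

Note the single quotes around `cleanup $?`: they defer expansion of `$?` until the trap actually fires, so cleanup sees the real exit status instead of whatever `$?` was when the trap was installed.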
Sounds like you might want to use a `trap` command to catch errors in your script. By using `trap`, you can run a cleanup function when your script encounters an error, which can then send the email and stop the containers. It's a great way to handle errors! Check out the Bash `trap` command for more details.
I noticed you mentioned a trap in your script. If you want the error message available inside the `cleanup` function, pass it as an argument. For example, `trap 'cleanup $? "Error happened at line $LINENO"' ERR` passes both the exit code and a message; the single quotes matter, because they defer expansion of `$?` and `$LINENO` until the trap fires, so `$LINENO` reports the line of the failing command.
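A small demonstration of the ERR-trap variant, again using an inner `bash -s` so `false` can simulate the failure. The message format and the commented-out cleanup actions are assumptions to adapt to your script:

```shell
#!/usr/bin/env bash
TRAP_MSG=$(bash -s <<'SCRIPT'
cleanup() {
  echo "exit code $1: $2"
  # real script: stop the containers, mail the message, etc.
}
# Single quotes defer expansion of $? and $LINENO until the trap fires.
trap 'cleanup $? "error at line $LINENO"' ERR

false   # simulated failure: triggers the ERR trap
SCRIPT
)
echo "$TRAP_MSG"
```

Unlike an `EXIT` trap, an `ERR` trap fires on each failing command rather than once at script exit, so it is the better hook if you want a line number in the message.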

That approach sounds reasonable! Just make sure to check the contents of the log file right before you send the email, so you capture the latest errors.
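To illustrate that point, here is a sketch where the mail body is read from the log inside the sending function, at send time. The log line, function name, and recipient are all placeholders; the `mail` call is commented out so the sketch also runs on a host without an MTA:

```shell
#!/usr/bin/env bash
LOGFILE=$(mktemp)
echo "FATAL: connection refused" >> "$LOGFILE"   # simulated error from an earlier step

send_failure_mail() {
  # Read the log at send time, not earlier, so the body holds the latest errors.
  body=$(cat "$LOGFILE")
  # Hypothetical delivery step; uncomment on a host with mail configured:
  #   printf '%s\n' "$body" | mail -s "pg_dump failed" you@example.com
  printf '%s' "$body"
}

SENT_BODY=$(send_failure_mail)
echo "$SENT_BODY"
```

If you instead copied the log into a variable early in the script, any errors appended after that point would silently be missing from the email.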