Hey folks! I'm curious about how the slow_query_log works with Aurora MySQL when log_output is set to 'file'. Do the slow query logs get written to the local disk first before being sent to CloudWatch, or are they logged directly to CloudWatch? Additionally, I'm wondering if having this feature on a busy system could affect storage I/O performance. Any insights?
3 Answers
Just to add on, publishing to CloudWatch Logs happens via API calls, so the logs are written locally first and then shipped to CloudWatch asynchronously. On a heavily loaded system it's worth keeping an eye on I/O and local storage while the slow query log is enabled.
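If you want to watch for that, one rough way (sketch only; the instance identifier and time window below are placeholders you'd swap for your own) is to poll the `FreeLocalStorage` metric, since Aurora log files live on the instance's local storage:

```shell
# Track free local storage on an Aurora instance over a 6-hour window.
# "my-instance" is a hypothetical DB instance identifier.
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name FreeLocalStorage \
  --dimensions Name=DBInstanceIdentifier,Value=my-instance \
  --start-time 2024-01-01T00:00:00Z \
  --end-time 2024-01-01T06:00:00Z \
  --period 300 \
  --statistics Minimum
```

A steadily shrinking minimum while the slow query log is on is a decent signal that log volume is eating local storage.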
When you enable log_output=FILE for Aurora MySQL, the slow query log is first written to the instance's local storage. Publishing to CloudWatch Logs is a separate, opt-in step handled by the built-in RDS log export integration (there's no CloudWatch agent involved). So yes, on a heavily active system this can add local I/O and consume local storage, which may affect performance. Keep an eye on your disk, CPU, and local storage metrics!
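To make the two-step nature concrete, here's a rough AWS CLI sketch (the parameter group and cluster names are hypothetical, not from your setup): enabling the log is a parameter change, and exporting it to CloudWatch is a separate cluster modification.

```shell
# Step 1: turn on the slow query log and file output via the
# cluster parameter group ("my-cluster-params" is a placeholder).
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name my-cluster-params \
  --parameters \
    "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" \
    "ParameterName=log_output,ParameterValue=FILE,ApplyMethod=immediate"

# Step 2: opt in to exporting the slow query log to CloudWatch Logs
# ("my-cluster" is a placeholder). Without this, the log stays local only.
aws rds modify-db-cluster \
  --db-cluster-identifier my-cluster \
  --cloudwatch-logs-export-configuration '{"EnableLogTypes":["slowquery"]}'
```

The fact that step 2 is optional is a good hint that the log exists on local storage regardless of whether CloudWatch is in the picture.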
The documentation covers publishing Aurora MySQL logs to CloudWatch Logs, though it doesn't explicitly say whether local storage is bypassed in the process. Worth a read either way: [this link](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.CloudWatch.html).

Thanks for the resource! I'd noticed the page doesn't clearly say whether the logs go straight to CloudWatch without touching local storage, so it's good to have the documentation on hand either way.