I've always made a point of monitoring the uptime of my websites, since it's crucial to know when they go down. Recently I decided to take it a step further and make content monitoring part of the delivery process. The change came after a developer's deployment broke our pricing page and nobody noticed until the client saw it on Monday, which may have cost them users over the weekend. Now I've set up a bot that checks vital pages every 15 minutes: if anything changes, I get an email and my team is notified via Slack. We even monitor specific elements, because we've had cases where minor content changes hurt SEO badly. I'm currently using Playwright, Node.js, and AWS Fargate for this. I'm curious: do you automate this too, or do you keep track of everything some other way?
4 Answers
No way I'd test in production! Everything should be tested beforehand. If you need to keep an eye on things post-deployment, telemetry is your best friend for monitoring performance.
I agree that monitoring during operation is essential! There can be all sorts of environmental issues affecting the site that might slip through testing before it goes live. We need to catch these problems before users notice.
I don’t really do what you’re doing. I integrate checks into the CI/CD pipeline and run automated tests before deployment. If anything fails the tests, it doesn't go live. That way, I can manage quality right from the start.
I prefer running automated UI test scripts (wdio, for example) during the quality stage, and I also have customers do testing and approvals. It really helps catch issues early, before going live.