Dealing with QA Test Bottlenecks in CI/CD: Anyone Found a Way?

Asked By TechWhiz42 On

We're facing a significant challenge with our CI/CD pipeline: roughly 800 automated tests that take about 45 minutes on average, sometimes over an hour when resources are constrained. The long run times are bad enough, but the real problem is flakiness: each run produces 5 to 10 random failures, and it's a different set every time. Developers respond by rerunning the pipeline and hoping it passes on the second attempt, which undermines the point of having the tests in the first place.

Our goal is to deploy multiple times a day, but the QA stage has become the bottleneck. Either we wait for these slow tests to finish, or we ignore failures and risk shipping real regressions. We've tried parallelizing the suite but hit resource limits, and we've tried running only the tests relevant to each pull request, but that misses cross-cutting regressions. It feels like we're stuck with tests that are both slow and unreliable. Has anyone actually solved this kind of problem? We need tests that are fast, reliable, and catch genuine issues. I'm starting to wonder whether our entire testing strategy needs a rethink.
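One way out of the rerun-and-hope cycle is to track flakiness explicitly instead of rerunning the whole pipeline: keep per-run pass/fail records, then separate tests that fail intermittently (flaky, candidates for quarantine and repair) from tests that fail every time (likely real regressions). A minimal sketch in Python, assuming you can export per-run results as name-to-passed mappings (the data format and test names here are hypothetical):

```python
from collections import defaultdict

def classify_tests(run_history):
    """Split failing tests into flaky vs. consistently failing.

    run_history: list of runs, each a dict mapping test name -> bool
    (True = passed). A test is flaky if it both passed and failed
    across runs; consistently failing if it never passed.
    """
    passes = defaultdict(int)
    fails = defaultdict(int)
    for run in run_history:
        for name, passed in run.items():
            if passed:
                passes[name] += 1
            else:
                fails[name] += 1
    flaky = {n for n in fails if passes[n] > 0}
    always_failing = {n for n in fails if passes[n] == 0}
    return flaky, always_failing

# Example: three runs of the same suite
history = [
    {"test_login": True, "test_cart": False, "test_api": False},
    {"test_login": True, "test_cart": True,  "test_api": False},
    {"test_login": True, "test_cart": False, "test_api": False},
]
flaky, failing = classify_tests(history)
print(sorted(flaky))    # ['test_cart']
print(sorted(failing))  # ['test_api']
```

Quarantined flaky tests can still run (and be tracked) without blocking the pipeline, while consistently failing tests keep blocking merges as genuine signals.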

1 Answer

Answered By DebuggingDiva On

Honestly, 800 tests on every pipeline run sounds excessive. Do you really need all of them every single time? Consider trimming the suite or categorizing the tests so the heavy ones run less frequently. It should help with the overall pipeline speed.
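The categorization idea above can be sketched as a tiered suite: a small, fast smoke tier runs on every pull request under a strict time budget, and everything else moves to a scheduled full run (e.g. nightly). A hypothetical sketch, assuming you can tag each test with a runtime and a criticality flag (the names, budget, and greedy strategy are all assumptions, not a prescribed policy):

```python
def partition_suite(tests, smoke_budget_s=300):
    """Greedily pick the fastest critical tests for a per-PR smoke
    tier until the time budget is spent; everything else goes to the
    nightly full suite.

    tests: list of (name, runtime_seconds, is_critical) tuples.
    """
    smoke, nightly = [], []
    budget = smoke_budget_s
    # Fastest tests first so the smoke tier stays quick.
    for name, runtime, critical in sorted(tests, key=lambda t: t[1]):
        if critical and runtime <= budget:
            smoke.append(name)
            budget -= runtime
        else:
            nightly.append(name)
    return smoke, nightly

tests = [
    ("test_checkout", 20, True),
    ("test_search_ranking", 400, False),
    ("test_login", 5, True),
    ("test_report_export", 120, False),
]
smoke, nightly = partition_suite(tests, smoke_budget_s=60)
print(smoke)    # ['test_login', 'test_checkout']
print(nightly)  # ['test_report_export', 'test_search_ranking']
```

The nightly tier still catches regressions the smoke tier misses, just with a delay, which is often an acceptable trade for unblocking multiple deploys per day.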

CodeCrafter99 -

For sure! It would be interesting to know if all those tests are essential every single time.
