I work at a medium-sized company facing some growing pains, especially around testing. Testing is almost entirely absent, and I'm trying to nurture a culture that encourages, and maybe even enforces, testing practices. I'm not an expert in this area, but I've been reading quite a bit and experimenting with various approaches, and I'm hoping to hear how others handle testing.
Right now, our architecture involves a series of API calls between multiple applications, plus some backend services that process queues. My initial focus is improving API testing: I've seen features developed in one area break another because we weren't aware of the API dependencies involved.
Here's a rough outline of my current testing strategy, which I plan to automate:
- **PR Build/Test:**
  - Run unit tests
  - Run integration tests
  - Run consumer contract tests
  - Launch the app with mocked dependencies in a container and run Playwright tests (I'm unsure whether this belongs here or after deployment to a dev environment)
- **Contract Testing:**
  - Trigger tests against the provider whenever consumer contracts change
  - Block deployments if contract testing fails
- **Post-Staging Deployment:**
  - Run smoke tests and comprehensive E2E tests against the staging environment
- **Post-Production Deployment:**
  - Run smoke tests
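To make the consumer contract step above concrete, here is a hand-rolled miniature of the idea: the consumer records the response shape it depends on, and a provider response sample is checked against it. The `check_contract` helper and the field names are made up for illustration; this is not Pact's API, just the principle it automates.

```python
def check_contract(response: dict, contract: dict) -> list[str]:
    """Return a list of violations: missing fields or wrong types."""
    violations = []
    for field, expected_type in contract.items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            violations.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(response[field]).__name__}"
            )
    return violations

# The consumer declares what it needs from, say, GET /orders/{id}:
order_contract = {"id": int, "status": str, "total_cents": int}

# A provider response with extra fields still passes (additions are safe):
sample = {"id": 42, "status": "shipped", "total_cents": 1999, "extra": True}
assert check_contract(sample, order_contract) == []

# A breaking change (renamed field) is caught before deployment:
broken = {"order_id": 42, "status": "shipped", "total_cents": 1999}
assert check_contract(broken, order_contract) == ["missing field: id"]
```

In a real setup the consumer's expectations would be published (e.g. as a Pact file) and the provider pipeline would replay them against the actual service, but the pass/fail logic reduces to checks like these.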
I know that over time, we'll determine what's effective and what needs adjustment, but I'd love to hear how others approach their testing setups and where they think I might be missing the mark.
4 Answers
Speed is crucial here: if testing takes too long, it halts productivity. Start with a minimal set of tests that reliably surfaces issues across apps. Consumer contract tests are especially valuable because they run quickly and catch problems early, before they cascade across teams. Keep slow tests off the critical path; leaving them there costs time and money. Make sure your team understands the quality of its test data, and maintain backward compatibility where necessary.
Treat your OpenAPI specs or contracts as the definitive source of truth and use them as a deployment gate. In my experience, keep pull requests focused and fast: run unit and integration tests early, then contract tests. Add linting and spec checks to catch breaking changes on PRs. Spin up an isolated environment per PR for quick Playwright smoke checks, and reserve comprehensive E2E tests for staging. For contract testing, Pact works well for consumer-driven contracts: both parties verify any contract change before deploying. Treating queue messages like APIs, with proper versioning and schema validation, is also effective.
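The "spec checks that catch breaking changes" part can be sketched as a diff between the old and new versions of a (deliberately simplified, illustrative) field-to-type mapping. Dedicated tools such as oasdiff or openapi-diff cover far more cases against real OpenAPI documents; this only shows the gating principle.

```python
def breaking_changes(old: dict, new: dict) -> list[str]:
    """Flag removed or retyped fields; additions are backward compatible."""
    problems = []
    for field, old_type in old.items():
        if field not in new:
            problems.append(f"removed: {field}")
        elif new[field] != old_type:
            problems.append(f"retyped: {field} ({old_type} -> {new[field]})")
    return problems

# Renaming "name" to "full_name" looks harmless but breaks consumers:
old_schema = {"id": "integer", "email": "string", "name": "string"}
new_schema = {"id": "integer", "email": "string", "full_name": "string"}

assert breaking_changes(old_schema, new_schema) == ["removed: name"]
```

A PR gate would run a check like this (in practice, via a spec-diff tool) and fail the build when the list is non-empty.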
One key thing to remember is that tests should ideally be written before the feature is implemented, so they steer the development process. Writing tests after the fact is a real struggle and often leads to a lack of ownership among developers. Tests should also run quickly, ideally while you're coding, so developers aren't waiting around for results. Be selective about what to test: you can't cover everything, so focus on the critical areas where tests will catch significant breakage. Don't strive for perfection; get the basics down first!
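The test-first loop described above, in miniature (the function and its behavior are invented for illustration): the test is written first, states the behavior you want, and the smallest implementation that satisfies it comes second.

```python
# Step 1: write the test before the code; it fails until step 2 exists.
def test_normalize_email():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
    assert normalize_email("bob@test.org") == "bob@test.org"

# Step 2: the smallest implementation that makes the test pass.
def normalize_email(raw: str) -> str:
    return raw.strip().lower()

test_normalize_email()  # fast enough to rerun on every save
```

The point is the feedback speed: a test like this runs in milliseconds, so it can stay in the inner development loop rather than only in CI.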
Honestly, if you're in a large organization and the Engineering Manager isn't pushing for quality across teams, you might want to start looking for a new place. That sort of environment is usually indicative of deeper issues that will hinder any effort to implement testing.