I'm using GitHub Actions together with AWS CodeBuild for continuous integration: GitHub Actions runs lint and compile checks, and CodeBuild runs the tests. The project works fine on my local Mac and in GitHub Actions, but the tests in CodeBuild fail intermittently with an error that includes 'EACCES' when trying to run a schema engine. I manage CodeBuild through a buildspec-test.yml file, but the failures persist: the same code can pass on one run and fail on the next without any changes. Here's a quick rundown of what I've already tried: I added binary targets, added wait/log messages to the global setup, and even switched the CodeBuild OS, all to no avail. Any advice?
2 Answers
Issues like this can occur when your CodeBuild project runs without specific VPC settings: builds then go out through CodeBuild's shared IP pool, and without proper authentication you may be throttled by the registries you're pulling from. I'd recommend running CodeBuild inside your own VPC with an EIP, or logging in to the registry as an authenticated user. Alternatively, consider mirroring your images and libraries in a private repository.
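If rate limiting is the culprit, authenticating before any image pull usually fixes it. Here's a minimal sketch of a buildspec-test.yml `pre_build` phase that logs in to Docker Hub; the `DOCKERHUB_USER` and `DOCKERHUB_TOKEN` variable names are assumptions, and you'd typically supply them through CodeBuild environment variables backed by Secrets Manager:

```yaml
# Sketch only -- adapt the variable names to your own setup.
phases:
  pre_build:
    commands:
      # --password-stdin avoids leaking the token into the build log.
      - echo "$DOCKERHUB_TOKEN" | docker login --username "$DOCKERHUB_USER" --password-stdin
```

Authenticated pulls get a higher rate limit than anonymous pulls from a shared IP, which is why this often makes "random" pull failures disappear.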
That makes sense; I've seen similar issues caused by rate limiting on shared IPs. But given that you're seeing EACCES errors, this sounds more like a file permission issue with the schema engine binary. Have you checked in your buildspec whether the binary has the right permissions?
The EACCES error usually indicates a file permissions problem. It sounds like Prisma is attempting to execute a binary and failing because of file access restrictions. I'd suggest checking whether the schema engine binary is actually present at the moment of the failure, and whether it is executable. You could add commands that log the relevant directory and its permissions before the tests run. Here are a few troubleshooting steps: check when the file is populated, log its existence, and confirm its execute permissions.
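Those steps can be sketched as a few shell commands dropped into your buildspec before the test phase. The engine path below is an assumption (adjust it to wherever your schema engine actually lands), and the two "demo setup" lines exist only so the sketch runs standalone; remove them in CI:

```shell
#!/bin/sh
# Hypothetical location -- point this at your real schema engine directory.
ENGINE_DIR="node_modules/@prisma/engines"

# Demo setup so this sketch is self-contained; delete these two lines in CI.
mkdir -p "$ENGINE_DIR"
touch "$ENGINE_DIR/schema-engine"

# 1. Is the directory populated at this point in the build?
ls -la "$ENGINE_DIR"

# 2. Log the permissions of every file in it.
find "$ENGINE_DIR" -type f -exec ls -l {} +

# 3. Ensure each file is executable (a no-op if it already is).
find "$ENGINE_DIR" -type f -exec chmod +x {} +

# 4. Confirm the execute bit took effect.
[ -x "$ENGINE_DIR/schema-engine" ] && echo "schema engine is executable"
```

Running these in the same phase that later fails will tell you whether the binary is missing, present but not yet written, or present without the execute bit, which narrows the intermittent failure down considerably.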
Thanks for your response! We managed to resolve the issue: we modified our buildspec-test.yml to pull the valkey image after the postgres image, and we also added a pnpm-workspace.yml file listing our specific dependencies.
Awesome, I'm glad to hear that worked out! Best of luck with your tests moving forward!

Thanks for the insight! We have a lot of test cases, and the odd part is that sometimes they all pass and other times a few at the top fail. It's a mixed bag.