I'm trying to debug an issue with a static S3 website I set up. The index page loads fine, but every other file it links to returns a 403. I've turned off 'Block All Public Access', enabled static website hosting on the bucket, disabled bucket ACLs, and attached a bucket policy allowing s3:GetObject for everyone. The confusing part: even after waiting a while for the server access logs to catch up, I see no records of the 403 errors, yet the public should be able to read every object in the bucket. I'm requesting through the website endpoint, but I'm not sure how to debug this further. Any suggestions?
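For context, the public-read policy I attached follows the standard pattern (bucket name here is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```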
3 Answers
You might want to consider putting CloudFront in front of the bucket and using Origin Access Control (OAC) instead. Granting public read access directly on an S3 bucket for website hosting is increasingly discouraged, and CloudFront also gets you HTTPS and caching, which the S3 website endpoint alone can't provide.
Are you using React or another single-page-application framework? Check exactly which URL the browser's network tab shows for the 403s. What do you get if you curl that URL, or use the AWS CLI to inspect the object? Also, just a thought: could the other files be KMS-encrypted? Anonymous website requests can't read KMS-encrypted objects.
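To make that concrete, here's roughly what I'd run (bucket name, region, and key below are placeholders; substitute the actual failing URL from your network tab):

```shell
# Confirm the object exists under that exact key and inspect its metadata.
# If the output contains "ServerSideEncryption": "aws:kms", anonymous
# requests through the website endpoint will get a 403.
aws s3api head-object --bucket example-bucket --key assets/app.js

# Fetch the exact failing URL outside the browser to rule out caching
# or service-worker interference; -I shows just the status and headers.
curl -I "http://example-bucket.s3-website-us-east-1.amazonaws.com/assets/app.js"
```

If `head-object` itself returns a 404, the key in the link doesn't match the key in the bucket (watch for case and leading-slash differences).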
If you're seeing a 403 in the browser and nothing in the bucket logs, something may be answering the request before S3 does, such as CloudFront, an NGINX server, or a load balancer; also keep in mind that S3 server access logs are delivered on a best-effort basis and can lag by hours. Another gotcha: without s3:ListBucket permission, S3 returns 403 rather than 404 for keys that don't exist, so a 403 can simply mean the link points at the wrong path. Finally, make sure you're using the correct static website endpoint, in the format http://bucket-name.s3-website-region.amazonaws.com.
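A quick way to separate endpoint problems from permission problems is to request the same object through both endpoints (bucket name and region below are placeholders):

```shell
# Website endpoint: serves index documents and redirects, HTTP only.
curl -I "http://example-bucket.s3-website-us-east-1.amazonaws.com/page.html"

# REST endpoint: no website features, but if this also returns 403,
# the problem is permissions (or a wrong key), not the website config.
curl -I "https://example-bucket.s3.us-east-1.amazonaws.com/page.html"
```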