I'm currently an SDET diving into Terraform and AWS, and I have a few questions after getting some demo stuff working. Here's what I'm curious about:
1. I'm thinking of using one S3 bucket per AWS account to store the Terraform state. I understand that the "key" plays a role in determining both the state file path and the LockID. If I define a backend in a file like `s3.tf`, will the LockID use just the key or a combination of the key and bucket name?
2. In relation to the previous question, what naming conventions do you recommend for state file keys? Would something like `environment/project/terraform.tfstate` be a good approach?
3. There's this chicken-and-egg situation with Terraform. What's the best way to manage this? Should I create a bootstrap `.tf` file? Or would it be better to manually set up the S3 bucket and then import it? What are the usual practices here?
4. As someone new to Terraform, what key resources should I focus on tracking? Currently, I'm working on the backend S3, Beanstalk (app and environment), and RDS.
3 Answers
For naming state files, I generally keep them in the same directory as the Terraform files in my repo. If you’re managing multiple repositories, just prefix them to avoid confusion.
I prefer to handle state and buckets in a dedicated AWS account. Make sure your naming is clear and systematic to prevent any conflicts. For instance, I call my state buckets `terraform-state-{{aws_account_id}}`. I’m not exactly sure how the individual files are named since I haven’t looked at them after the initial setup, but clarity is key!
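As a rough sketch of that setup, the backend block might look like this (the bucket, key, and DynamoDB table names here are illustrative, following the `terraform-state-<account id>` convention):

```hcl
terraform {
  backend "s3" {
    # Hypothetical bucket named per the terraform-state-<aws_account_id> convention
    bucket         = "terraform-state-123456789012"
    key            = "my-project/prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # enables state locking
    encrypt        = true
  }
}
```

To the LockID question from the original post: with DynamoDB locking, the lock item's LockID is the bucket name and key combined, e.g. `terraform-state-123456789012/my-project/prod/terraform.tfstate`, so two configurations only contend for the same lock if both bucket and key match.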
Using a centralized single bucket for storing states is a good approach! This allows multiple roles within your AWS organization to access state files while confining them to their respective paths. I suggest a naming convention like `${AWS::AccountID}/${GitHub_org}/${GitHub_repo}/path/to/main/tf/${region}.tfstate`. For example, if you have a repo for Kubernetes services, the structure would look something like this: `123456789123/hashicorp/terraform-guides/self-serve-infrastructure/k8s-services/us-east-1.tfstate`. Just remember that this method requires KMS key permissions for everyone in the organization and proper bucket policies for secure operations.
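Under that convention, a backend block for the Kubernetes-services example might be sketched like this (the bucket name and KMS key ARN are placeholders, not real resources):

```hcl
terraform {
  backend "s3" {
    # Single shared bucket for the whole organization (illustrative name)
    bucket = "org-central-terraform-state"
    # Key follows ${AWS::AccountID}/${GitHub_org}/${GitHub_repo}/path/${region}.tfstate
    key    = "123456789123/hashicorp/terraform-guides/self-serve-infrastructure/k8s-services/us-east-1.tfstate"
    region = "us-east-1"
    encrypt    = true
    # Hypothetical KMS key ARN; everyone writing state needs permission to use it
    kms_key_id = "arn:aws:kms:us-east-1:123456789123:key/EXAMPLE-KEY-ID"
  }
}
```

The bucket policy would then restrict each role to its own key prefix, which is what confines teams to their respective paths.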

Just to clarify, when you mention that the s3 block is commented out: isn't that exactly the point, that you need to initialize without the s3 backend block first so the bucket can be created with local state?
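For context, the bootstrap flow being described would look roughly like this (commands only; this assumes the configuration defines the state bucket and lock table as resources):

```shell
# 1. With the backend "s3" block commented out, Terraform uses local state.
terraform init
terraform apply   # creates the state bucket (and DynamoDB lock table) via local state

# 2. Uncomment the backend "s3" block, then migrate the local state into the bucket.
terraform init -migrate-state
```

After the migration, the local `terraform.tfstate` file can be deleted, and all subsequent runs read and lock state remotely.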