I'm looking for guidance on building a vehicle detection and classification pipeline on AWS. As a complete AWS beginner, I plan to capture one image per second, send it to the cloud, and run batch inference—real-time analytics aren't necessary for this project. I'm very budget-conscious and would appreciate any implementation examples, tutorials, or blog posts that could help. I've already gotten some cost-saving advice, specifically: 1. uploading a tarball from the NVR to S3 and running inference on the batch, 2. using EC2 Spot Instances, and 3. choosing Graviton instances over standard EC2 instances. Any additional insights would be greatly appreciated!
5 Answers
What kind of compute resources are you looking at? It might be helpful to provide some specs like memory and CPU requirements for better recommendations.
Yeah, I agree! It's better to dig into the topic yourself instead of relying on AI. It might not have all the details you need to make this project successful.
Just a heads-up: Graviton isn't an alternative to EC2—it's a processor family that powers certain EC2 instance types, so Graviton instances *are* EC2 instances. Beyond that, your approach depends on how large the tarball is and how long it takes to process. If processing finishes in under 15 minutes (Lambda's maximum timeout), consider AWS Lambda: set up an S3 trigger that fires when the tarball is uploaded and let Lambda handle the processing. If it takes longer than that, ECS with Spot Instances could be the way to go—just keep in mind that Spot capacity can be reclaimed, so your jobs need to tolerate interruption. ECS is generally easier for someone new to manage, and you might want to look into Fargate for even less hassle.
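To make the Lambda route concrete, here's a minimal sketch of a handler wired to an S3 "object created" trigger. The bucket and key names are made up for illustration, and the actual download/inference step is left as a comment since it depends on your model:

```python
# Hypothetical Lambda handler: an S3 ObjectCreated notification fires when
# the NVR tarball lands in the bucket, and the event carries the bucket name
# and object key of the uploaded file.

def handler(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # In a real function you would now fetch the object, e.g.
        #   boto3.client("s3").download_file(bucket, key, "/tmp/frames.tar.gz")
        # then extract it with the tarfile module and run inference per image.
        results.append({"bucket": bucket, "key": key})
    return {"processed": results}


# Local smoke test using the shape of a real S3 event notification
# (illustrative bucket/key names, not from the original post):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "nvr-frames"},
                "object": {"key": "2024-06-01/frames.tar.gz"}}}
    ]
}
print(handler(sample_event, None))
```

One thing to budget for: Lambda's writable storage is `/tmp`, which defaults to 512 MB (configurable up to 10 GB), so make sure the extracted tarball fits.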
Honestly, I find these "I asked AI what to do" posts a bit off-putting. It's important to invest some time into your own research, especially since AI can miss important nuances.
The Graviton question really comes down to whether you want x86 or ARM: Graviton chips are ARM-based, so your container images and any native dependencies need ARM builds. Keep that in mind when making your choice.

If you're leaning toward ECS for ease, definitely check out Fargate. It abstracts a lot of the infrastructure management away, making your life easier.
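If Fargate ends up being the choice, the batch job boils down to one RunTask call per uploaded tarball. Here's a sketch that just builds the request parameters—the cluster, task definition, and subnet IDs are placeholders, not values from the original post:

```python
# Hypothetical sketch of the parameters for an ECS RunTask call launching a
# one-off Fargate batch job. All resource names below are illustrative.

def build_run_task_params(cluster, task_definition, subnets):
    return {
        "cluster": cluster,
        "taskDefinition": task_definition,
        "launchType": "FARGATE",       # no EC2 instances to manage
        "count": 1,                    # one task per tarball
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnets,
                # Needed if the task pulls its image over the public internet
                # and the subnet has no NAT gateway:
                "assignPublicIp": "ENABLED",
            }
        },
    }


params = build_run_task_params(
    "vehicle-batch", "vehicle-inference:1", ["subnet-aaa111"]
)
# In a real script you would then call:
#   boto3.client("ecs").run_task(**params)
print(params["launchType"])
```

Note that plain Fargate doesn't use Spot pricing by default; if you want the Spot discount on Fargate you'd use a `FARGATE_SPOT` capacity provider instead of `launchType`, with the same interruption caveat as EC2 Spot.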