Hey everyone! I have a setup with an API Gateway, a Lambda service, and an Aurora PostgreSQL database that uses triggers to modify data. I'm planning to add a Redis cache to store data for specific devices, so the Lambda function can read that information without hitting the database on every invocation. My main question is: how should I write values to the Redis cache from the database? Should I use an AWS Lambda extension for this? When a trigger updates data in the database, can I use that extension to update the cache as well? Or is there a better, more efficient solution? Thanks for your help!
5 Answers
It sounds like you're aiming for a standard second-level cache setup that keeps the primary datastore unaware of it. Aurora already does a fair amount of caching internally, so tightly coupling the cache to the database isn't necessary. Consider integrating the cache into your data access layer instead. And if you haven't considered it yet, DynamoDB instead of Redis might be worth a look for cost-effectiveness and easier management.
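For example, a cache-aside read in the data access layer might look like this (a minimal Python sketch using redis-py; the Redis host, the devices table, and the connection handling are placeholders, not your actual setup):

```python
import json
import redis

# Hypothetical endpoint; in a Lambda you'd typically create this outside
# the handler so the connection is reused across invocations.
cache = redis.Redis(host="my-redis-host", port=6379, decode_responses=True)
TTL_SECONDS = 300  # stale entries age out even if invalidation misses one

def get_device(device_id, db_conn):
    key = f"device:{device_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: no database round trip

    # Cache miss: read from Aurora, then populate the cache for next time
    with db_conn.cursor() as cur:
        cur.execute("SELECT id, name, state FROM devices WHERE id = %s",
                    (device_id,))
        row = cur.fetchone()
    if row is None:
        return None
    device = {"id": row[0], "name": row[1], "state": row[2]}
    cache.set(key, json.dumps(device), ex=TTL_SECONDS)
    return device
```

The database never knows the cache exists; everything goes through this one function.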
Caching at the application level rather than directly in the database could be the way to go. Just keep in mind that cache invalidation is the hard part: you need a reliable way to evict or refresh entries whenever the underlying rows change.
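The simplest reliable approach is to invalidate in the same code path that performs the write, something like this (Python sketch; the host and table names are made up for illustration):

```python
import redis

cache = redis.Redis(host="my-redis-host", port=6379, decode_responses=True)

def update_device(device_id, new_state, db_conn):
    # Write to the primary datastore first
    with db_conn.cursor() as cur:
        cur.execute("UPDATE devices SET state = %s WHERE id = %s",
                    (new_state, device_id))
    db_conn.commit()
    # Then delete the cached entry so the next read repopulates it.
    # Deleting (rather than overwriting) avoids racing with another writer.
    cache.delete(f"device:{device_id}")
```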
You might want to check out this whitepaper on database caching strategies with Redis. It's free and gives a good overview of the different approaches and their trade-offs.
I've never tried writing to the cache directly from the database. My approach would be to set up a 'wrapper' Lambda function that invokes your original Lambda, takes the response, and caches it before returning. This keeps the caching logic separate from the original function's behavior, so you can change either one without touching the other.
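A rough sketch of that wrapper (Python with boto3 and redis-py; the function name and key scheme are invented for illustration):

```python
import json
import boto3
import redis

lambda_client = boto3.client("lambda")
cache = redis.Redis(host="my-redis-host", port=6379, decode_responses=True)

def handler(event, context):
    key = f"device:{event['device_id']}"

    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    # Cache miss: synchronously invoke the original Lambda
    response = lambda_client.invoke(
        FunctionName="original-device-lambda",  # hypothetical name
        Payload=json.dumps(event),
    )
    payload = json.loads(response["Payload"].read())

    # Cache the response before returning it unchanged
    cache.set(key, json.dumps(payload), ex=300)
    return payload
```

The extra invocation adds a bit of latency and cost on misses, but hits skip the original function entirely.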
Honestly, I don't see why you'd want to write to Redis directly from the database; it couples your schema and triggers to infrastructure they shouldn't need to know about. It feels unnecessary to me.
Definitely! Lazy loading for reads, plus a write-through strategy when modifying data, sounds like a smart way to go.
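If it helps, write-through just means updating the cache in the same path as the database write, so reads never miss on freshly written data (Python sketch with placeholder names):

```python
import json
import redis

cache = redis.Redis(host="my-redis-host", port=6379, decode_responses=True)

def write_device(device, db_conn):
    # Upsert into the primary datastore
    with db_conn.cursor() as cur:
        cur.execute(
            "INSERT INTO devices (id, name, state) VALUES (%s, %s, %s) "
            "ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, "
            "state = EXCLUDED.state",
            (device["id"], device["name"], device["state"]),
        )
    db_conn.commit()
    # Write the same value straight to Redis: the next read is a hit
    cache.set(f"device:{device['id']}", json.dumps(device), ex=300)
```

The trade-off is that you cache every write, including rows that may never be read; lazy loading fills the cache only for data that's actually requested.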