I've inherited an application that uses DynamoDB for storage, but we're running into throttling because of write capacity unit (WCU) limits. We have a straightforward schema with five attributes (columns), and one of them, which I'll call 'items', receives an update every 10-15 seconds for a few hours at a time. Because of these heavy updates we consistently hit WCU limits, even with on-demand mode.
My boss wants to transition to a different database. I'm considering three options: Amazon Aurora, Amazon Keyspaces, and Valkey. Which one would be best suited for my needs?
- I need to update many rows regularly for short bursts of time.
- Only one column is getting updated often.
- We're facing throttling due to WCU limits in DynamoDB.
- Data needs to be retained for a month.
I'm still learning about backend technologies, so I apologize if I've missed anything important!
2 Answers
Keyspaces is reportedly a Cassandra-compatible API layered over DynamoDB-style storage, so it's unlikely to fix your throttling. And while Valkey might be usable, it feels like using a pogo stick for a long commute: why take the harder route? This smells like a schema design issue; I'd be happy to dig into that with you and help you troubleshoot.
It sounds like you might be hammering a single hot item (or a single partition key) rather than spreading independent updates across partitions. If you're writing that frequently into such a narrow slice of the keyspace, you may need to rethink the key design.
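If the writes really are piling up under one partition key, a standard mitigation is write sharding: append a small suffix to the key so the load fans out across several partitions. A minimal sketch, where the key name, suffix format, and shard count are all assumptions rather than your actual schema:

```python
import random

SHARD_COUNT = 10  # assumption: size this to your peak write rate

def sharded_key(event_id: str) -> str:
    """Pick a random shard so writes for one event spread across partitions."""
    return f"{event_id}#{random.randrange(SHARD_COUNT)}"

def all_shards(event_id: str) -> list[str]:
    """Reads must fan out: query every shard key and merge the results."""
    return [f"{event_id}#{n}" for n in range(SHARD_COUNT)]
```

The trade-off is read-side complexity: every query for an event now has to hit all shards and merge the results.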
Honestly, DynamoDB should still work for this use case if it's configured properly; the table setup may be the real problem. You could also rotate tables for different sets of data. It's unlikely you're maxing out DynamoDB itself: it can absorb very large write loads when set up right.
I totally agree! You can raise your provisioned WCU to meet the requirement and scale back down once the process completes. Just remember that on-demand mode only scales smoothly up to roughly double your previous traffic peak, so a sudden burst can still throttle; pre-warming the table before the batch starts helps establish that peak. Check out the AWS docs on capacity modes and pre-warming.
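For the scale-up/scale-down approach in provisioned mode, the shape of the calls is roughly this; the table name and capacity figures are placeholders, not recommendations:

```python
# Hypothetical parameters for boto3's dynamodb update_table call.
scale_up = {
    "TableName": "events",  # placeholder
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 100,
        "WriteCapacityUnits": 4000,  # sized for the burst window
    },
}
scale_down = {
    "TableName": "events",
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 10,
        "WriteCapacityUnits": 10,  # idle baseline
    },
}
# In production (requires AWS credentials):
#   client = boto3.client("dynamodb")
#   client.update_table(**scale_up)    # before the batch window
#   ...run the batch...
#   client.update_table(**scale_down)  # after it completes
```

Keep in mind that AWS limits how many times per day you can decrease a table's provisioned throughput.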
It really comes down to optimizing DynamoDB before jumping ship. And if you move to Valkey or Keyspaces, you'll still face capacity-management challenges of their own.

Here's how my data is set up: 'eventId' is the partition key and 'tenantId' is the sort key, plus 'eventStatus' and 'items' (a string of about 7 KB). Each 'eventId' can have many rows, one per active tenant. I'm updating the 'items' attribute every 10-15 seconds and still hitting throttling.
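Those details make a hot partition the likely culprit: every tenant row for an event shares one partition key, a single partition allows roughly 1,000 WCU/s, and a write is billed on the full ~7 KB item size even though only 'items' changed. A back-of-envelope check, with the active-tenant count as a placeholder:

```python
import math

ITEM_KB = 7            # approximate row size, from the question
PARTITION_WCU = 1000   # rough per-partition write ceiling in DynamoDB
UPDATE_PERIOD_S = 10   # worst case of the 10-15 s update cycle

# A standard write costs 1 WCU per 1 KB of item size, rounded up.
wcu_per_write = math.ceil(ITEM_KB / 1.0)            # 7 WCUs per update
max_updates_per_s = PARTITION_WCU // wcu_per_write  # ~142 rows/s per eventId

tenants = 5000  # placeholder: active rows under a single eventId
needed_wcu_per_s = tenants * wcu_per_write / UPDATE_PERIOD_S
print(needed_wcu_per_s)                  # 3500.0
print(needed_wcu_per_s > PARTITION_WCU)  # True: the partition would throttle
```

If the real tenant count pushes past roughly 140 updates per second per event, no table-level capacity setting fixes it; the key design has to spread those rows across partitions, or the item has to shrink.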