I'm diving into the connection limits for Aurora DSQL, and I have some concerns. According to the documentation, the maximum number of connections per cluster is 10,000. If, for instance, AWS Lambda scales up to 10,001 concurrent instances, does that mean a user will be unable to connect at all? Additionally, I found that the maximum connection rate is capped at 100 connections per second, which seems to be a problem since it's not configurable. This makes me question the scalability of DSQL, which seems contrary to the cloud's promise of elasticity. Since I haven't worked with RDS yet, I also wanted to know whether DSQL supports RDS Proxy for connection pooling, or if that's a non-starter?
5 Answers
Also keep in mind that a connection does not always equate to a user. If your queries are efficient, a lower connection limit can still serve a very large number of requests, especially when users make only sporadic calls. That can bridge the gap until AWS raises these limits.
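To make that concrete, here's a back-of-the-envelope estimate. The query latency and per-user request rate below are illustrative assumptions, not measured DSQL numbers:

```python
# Rough capacity estimate: how many users can share a fixed connection cap?
# The latency and per-user rate are assumptions for illustration only.

connections = 10_000                       # DSQL's documented per-cluster cap
query_latency_s = 0.005                    # assume an average query takes 5 ms
queries_per_conn = 1 / query_latency_s     # ~200 queries/s per connection

total_qps = connections * queries_per_conn
print(f"Aggregate throughput: {total_qps:,.0f} queries/s")

# If each active user issues one query every 2 seconds (sporadic traffic):
user_qps = 0.5
supported_users = total_qps / user_qps
print(f"Roughly {supported_users:,.0f} concurrent users")
```

Under those (optimistic) assumptions, 10,000 connections translates to millions of sporadic users, which is why connection count and user count shouldn't be conflated.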
Yeah, these limits can be pretty frustrating for serverless applications. As DSQL stands today, you can hit those caps during busy periods unless you pool or reuse connections aggressively. And unfortunately, DSQL doesn't support RDS Proxy either, which is a big deal, since managed connection pooling would absorb a lot of these connection issues.
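One common mitigation in Lambda is to open the connection once at module scope (during the cold start) and reuse it across warm invocations, so each execution environment holds one connection instead of one per request. A minimal sketch of the pattern; in real code `connect()` would be something like `psycopg.connect(...)` with a DSQL IAM auth token as the password, but a counting stub is used here so the example is self-contained:

```python
# Connection reuse pattern for Lambda: create the connection once per
# execution environment and reuse it on every warm invocation.

connect_calls = 0

def connect():
    """Stand-in for a real driver call, e.g.
    psycopg.connect(host=..., password=iam_token, sslmode="require")."""
    global connect_calls
    connect_calls += 1
    return object()  # pretend connection

_conn = None  # lives for the lifetime of the Lambda execution environment

def get_connection():
    global _conn
    if _conn is None:        # only on cold start
        _conn = connect()
    return _conn

def handler(event, context):
    conn = get_connection()  # warm invocations reuse the same connection
    # ... run queries with conn ...
    return {"statusCode": 200}

# Three invocations against a warm container open just one connection.
for _ in range(3):
    handler({}, None)
print(connect_calls)  # 1
```

With this pattern your connection count tracks the number of concurrent execution environments, not the number of requests, though it still won't help once Lambda itself scales past the cluster cap.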
I totally understand your disappointment; it feels like a step backward in flexibility for cloud-native apps. If burst traffic is a major part of your workload, it may be worth evaluating databases that handle large numbers of concurrent connections natively.
Totally agree, Aurora DSQL has some serious shortcomings. The strict connection limits and lack of connection pooling definitely hamper its usability for bursty serverless applications. If you’re considering alternatives, CockroachDB is getting a lot of buzz for its ability to handle numerous concurrent connections without needing an external proxy, plus its scalability is a big win.
It sounds like your expectations for cloud scalability may need some adjusting. AWS puts constraints like these in place for a reason, usually to protect the infrastructure from being overwhelmed and to manage resources effectively. If your Lambda instances hit the 10,000 mark, then yes, an additional user trying to connect would be rejected while all connections are in use.
As for the 100 connections per second figure, that's the sustained limit rather than an absolute one: bursts of up to 1,000 connections per second are tolerated briefly, but sustained traffic above 100/s will be throttled, which isn't well suited to big spikes. On the plus side, if you're genuinely hitting capacity often, you can request a quota increase to better fit your needs; just know that AWS tends to review these requests carefully.
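When you do brush against the connection-rate cap, the usual client-side answer is to retry with exponential backoff and jitter. A minimal sketch; `ThrottledError` and the connect stub are placeholders for whatever exception your driver actually raises when a connection is refused:

```python
import random
import time

class ThrottledError(Exception):
    """Placeholder for the error your driver raises when the
    connection-rate limit rejects a new connection."""

def connect_with_backoff(connect, max_attempts=5, base_delay=0.1):
    """Retry a connection attempt with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return connect()
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise
            # full jitter: sleep uniformly in [0, base_delay * 2^attempt]
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))

# Demo with a stub that fails twice, then succeeds.
attempts = {"n": 0}
def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ThrottledError
    return "connection"

result = connect_with_backoff(flaky_connect, base_delay=0.001)
print(result)  # connection
```

Backoff smooths out brief bursts over the 100/s sustained limit, but it's a mitigation, not a fix, if your steady-state connection rate exceeds the quota.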
You can indeed request higher limits if the defaults are too restrictive. AWS typically sets them conservatively to avoid overloading its services, but once you provide a justification they tend to be reasonable about increases. It's worth measuring how high your traffic actually gets and requesting accordingly.
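Increase requests can go through the console or the Service Quotas API. A sketch using boto3; note the service code "dsql" and the quota code "L-XXXXXXXX" below are placeholders I'm assuming, so list your account's quotas first to find the real codes:

```python
# Sketch of requesting a DSQL quota increase via the Service Quotas API.
# The service code and quota code are placeholder assumptions; discover
# the real ones for your account first, e.g.:
#
#   import boto3
#   sq = boto3.client("service-quotas")
#   for q in sq.list_service_quotas(ServiceCode="dsql")["Quotas"]:
#       print(q["QuotaCode"], q["QuotaName"], q["Value"])

def build_increase_request(service_code, quota_code, desired_value):
    """Assemble the parameters for request_service_quota_increase()."""
    return {
        "ServiceCode": service_code,
        "QuotaCode": quota_code,
        "DesiredValue": float(desired_value),
    }

params = build_increase_request("dsql", "L-XXXXXXXX", 25_000)
print(params)

# Actual call (requires credentials and the correct quota code):
#   boto3.client("service-quotas").request_service_quota_increase(**params)
```

Keep the justification concrete (measured peak connection rates, Lambda concurrency numbers); vague "we might need more" requests are the ones that get pushed back on.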
