Challenges with Using Temporary Tables in Aurora Serverless v2 Postgres

Asked By CuriousCat99

I'm exploring the idea of using temporary tables in an Aurora Serverless v2 Postgres setup to handle analytics data more efficiently. Our main database stores live procurement data, and we've transitioned from Redshift to using materialized views for our analytics. As we plan to add more filtering options for users, these queries could become quite complex and slow, especially with large datasets.

To speed things up, I'm considering creating temporary tables that persist for the length of a user's session. The catch is that we connect from Lambda, so each invocation's connection closes and Postgres drops the temporary tables with it. My idea is to hold a transaction open and store its ID in DynamoDB, so we can keep the temporary tables alive for as long as the transaction remains open. But I'm wondering how Aurora PG will handle many long-lived open transactions with temporary tables. Any thoughts on this?
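To illustrate the session-lifetime problem, here is a minimal sketch (table and column names are assumptions, not from our actual schema). In Postgres, a temporary table only lives as long as the backend session that created it:

```sql
-- Create a session-scoped scratch table from a filtered analytics query.
-- ON COMMIT PRESERVE ROWS (the default) keeps the rows after each commit,
-- but the table itself still vanishes when the session ends.
CREATE TEMPORARY TABLE user_filter_results
    ON COMMIT PRESERVE ROWS AS
SELECT *
FROM analytics_mv          -- hypothetical materialized view
WHERE region = 'EU';

-- Subsequent queries in the SAME session can reuse it cheaply:
SELECT count(*) FROM user_filter_results;
```

This is exactly why the Lambda execution model is a problem here: each invocation typically gets a fresh connection, so the temporary table from the previous invocation is already gone.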

3 Answers

Answered By DataGuru42

This approach is more common than you’d think. I’ve seen financial companies use temporary tables that are set to delete after a certain period. Before going down this road, consider creating a read replica of your Aurora DB and running the analytics off that. It alleviates some of the load on the main tables. If that doesn’t suit your needs, proceed with your temp tables but try implementing range partitioning for easy deletion later. RDS Proxy could be useful for managing the database connections too!
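The range-partitioning suggestion might look something like the sketch below (all names are illustrative). Partitioning a shared scratch table by creation time lets you expire old session data by dropping a partition, which is a fast metadata operation rather than a bulk DELETE:

```sql
-- Hypothetical scratch table, range-partitioned by creation time.
CREATE TABLE analytics_scratch (
    session_id text        NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

-- One partition per day (could also be per hour, depending on volume).
CREATE TABLE analytics_scratch_2024_06_01
    PARTITION OF analytics_scratch
    FOR VALUES FROM ('2024-06-01') TO ('2024-06-02');

-- Expiring a whole day of stale session data avoids row-by-row deletes:
DROP TABLE analytics_scratch_2024_06_01;
```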

QuestionMaster88 -

Thanks, but I plan to leverage PostgreSQL's temporary table functionality, so no need to handle table deletions manually.

Answered By TableTuner45

If you’re having performance issues with materialized views, don’t forget that you can index them to boost speed. Just be cautious about long-running transactions; they can create locking issues down the line. Make sure access to temporary schemas is controlled to avoid conflicts.
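For reference, indexing a materialized view uses the same syntax as indexing a table (the view and column names below are made up). A unique index has the added benefit of enabling `REFRESH MATERIALIZED VIEW CONCURRENTLY`, which lets readers keep querying the view during a refresh:

```sql
-- A UNIQUE index is required for CONCURRENTLY refreshes:
CREATE UNIQUE INDEX idx_mv_orders_pk
    ON mv_procurement_orders (order_id);

-- An index matching the common filter pattern:
CREATE INDEX idx_mv_orders_region
    ON mv_procurement_orders (region, created_at);

-- Non-blocking refresh (slower than a plain refresh, but readers
-- are not locked out while it runs):
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_procurement_orders;
```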

QuestionMaster88 -

Locking shouldn't be an issue since each account has its own schema with the materialized view, and I'm planning to populate the temporary table for just that user. Everything is carefully managed to prevent any overlap.

Answered By SQLWhiz

It sounds like you're overcomplicating things by letting users query a massive table directly. A better solution might be a constrained, user-friendly interface for data exploration, rather than a raw SQL sandbox.
