I'm working with PostgreSQL 17 and have a critical UI query that currently takes over 7 seconds to run. Our goal is to bring the response time down to around 1 second. The plan shows that a sequential scan of the 'orders' table and a nested loop join with the 'event_audit_log' table consume most of the time, so I want to address those. I'm new to indexing strategies in Postgres and have some specific questions about creating suitable indexes:
1) For the 'event_audit_log' table, should I create a composite index on (request_id, created_at, event_comment_text), or a covering index on (request_id, created_at) with an INCLUDE clause for event_comment_text? When should covering indexes be used in Postgres?
2) Regarding the 'orders' table, should I use a covering index on (entity_id, due_date, order_type) with an INCLUDE clause for firm_dspt_case_id, or just a composite index on (entity_id, due_date, order_type)?
3) For columns queried with range predicates, like created_at or due_date, should they be the leading column in a composite index, or can they be non-leading?
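To make questions 1 and 3 concrete, here is a sketch of the two index shapes being compared, using the table and column names from the question. This is illustrative DDL, not a recommendation for your schema:

```sql
-- Option A: plain composite index. event_comment_text is a key column,
-- so it participates in B-tree ordering and widens every index entry.
CREATE INDEX idx_audit_composite
    ON event_audit_log (request_id, created_at, event_comment_text);

-- Option B: covering index (PostgreSQL 11+). event_comment_text is stored
-- only in leaf pages via INCLUDE, so the key stays narrow while still
-- allowing index-only scans that return the comment text.
CREATE INDEX idx_audit_covering
    ON event_audit_log (request_id, created_at)
    INCLUDE (event_comment_text);

-- Question 3: put equality columns first and the range column last.
-- A query like
--   WHERE request_id = $1 AND created_at >= $2 AND created_at < $3
-- descends to one request_id and scans a narrow created_at range;
-- leading with the range column would force a far larger scan.
```

Check with `EXPLAIN (ANALYZE, BUFFERS)` that the plan actually shows an Index Only Scan; index-only scans also depend on the visibility map, so a recently vacuumed table matters.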
4 Answers
It would definitely help to share the complete table schema using `\d+`.
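For example, in psql (assuming the table names from the question):

```
\d+ event_audit_log
\d+ orders
```

The `\d+` output includes column types, indexes, and storage details that are easy to miss in a GUI export.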

Here's the table definition I pulled from DBeaver: [link to schema]. Since order_type is predominantly TYPE_A and TYPE_B, a dedicated index targeting those values could narrow your searches effectively. Also, if the optimizer isn't using the 'due_date' index, consider adding 'entity_id' to that index to improve performance further.
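One way to combine both suggestions is a partial index; this is a sketch under the assumption that queries filter on those order_type values explicitly (the filtered values are illustrative):

```sql
-- Partial index: only rows of the common types are indexed, keeping the
-- index small. The planner uses it only when the query's WHERE clause
-- implies the index predicate, e.g. WHERE order_type = 'TYPE_A'.
CREATE INDEX idx_orders_entity_due_partial
    ON orders (entity_id, due_date)
    WHERE order_type IN ('TYPE_A', 'TYPE_B');
```

If most queries also filter on entity_id, leading with it here gives the planner an equality prefix before the due_date range.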