Optimize Slow PostgreSQL Queries with EXPLAIN ANALYZE
Introduction
Slow queries are one of the most common operational headaches for PostgreSQL DBAs. EXPLAIN ANALYZE is the single most powerful tool you have for diagnosing and fixing performance issues: it runs the query and shows the actual execution plan, real row counts, and timing information. In this guide I’ll walk you through a step-by-step approach to using EXPLAIN ANALYZE to find and resolve slow queries, with practical examples and tuning tips you can apply to production systems (including managed environments like Worlber Carbonate).
Why EXPLAIN ANALYZE?
EXPLAIN alone shows the planner’s estimated plan and costs. EXPLAIN ANALYZE actually executes the query and reports the runtime statistics. That difference matters: the planner often makes incorrect cardinality estimates, which leads to suboptimal plans. EXPLAIN ANALYZE lets you see the real behavior and focus your optimization efforts.
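To make the difference concrete, here is a minimal pair (the query is illustrative and uses the orders table from the example below):

```sql
-- Estimates only: shows the plan the optimizer would choose, does NOT run the query
EXPLAIN
SELECT * FROM orders WHERE customer_id = 12345;

-- Actually executes the query and adds real row counts and per-node timing
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 12345;
```

Comparing the estimated `rows=` against the `actual ... rows=` in the second output is usually the fastest way to spot a misestimate.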
Step-by-step workflow
- Reproduce the problem with a realistic test or in a safe environment (Quick Deploy or a replica from Carbonate are ideal).
- Run EXPLAIN ANALYZE with useful options: BUFFERS and VERBOSE where appropriate.
- Interpret the plan: look for sequential scans, high actual vs estimated rows, high startup/total times, and expensive nested loops.
- Try targeted fixes: add indexes, update statistics, rewrite queries, or tune planner settings and memory.
- Rerun EXPLAIN ANALYZE to confirm improvement.
Example: Finding a missing index
Assume a simple schema:
CREATE TABLE orders (
    id          serial PRIMARY KEY,
    customer_id int NOT NULL,
    status      text,
    created_at  timestamp
);

A problematic query:

SELECT id, status
FROM orders
WHERE customer_id = 12345
ORDER BY created_at DESC
LIMIT 10;

Run EXPLAIN ANALYZE (note that options are passed as a parenthesized list):

EXPLAIN (ANALYZE, VERBOSE, BUFFERS)
SELECT id, status
FROM orders
WHERE customer_id = 12345
ORDER BY created_at DESC
LIMIT 10;

Typical output you might see:
Limit  (cost=0.43..12.34 rows=10 width=32) (actual time=45.123..45.789 rows=10 loops=1)
  Buffers: shared hit=1234 read=0
  ->  Sort  (cost=0.43..23.45 rows=100 width=32) (actual time=45.122..45.745 rows=50 loops=1)
        Sort Key: created_at DESC
        Buffers: shared hit=1234 read=0
        ->  Seq Scan on orders  (cost=0.00..10.00 rows=100 width=32) (actual time=0.123..44.950 rows=50 loops=1)
              Filter: (customer_id = 12345)
              Rows Removed by Filter: 99950
              Buffers: shared hit=1234 read=0
Planning Time: 0.345 ms
Execution Time: 45.812 ms

Interpretation: the planner chose a sequential scan followed by a sort. Only 50 rows survive the filter, but the scan had to read and discard 99,950 rows, so scanning dominates the runtime. The reason: there is no index that serves the WHERE clause on customer_id together with the ORDER BY on created_at. A composite index can eliminate both the seq scan and the sort.
Fix: add a composite index
Create an index that matches the WHERE and ORDER BY:
CREATE INDEX ON orders (customer_id, created_at DESC);
ANALYZE orders;

Rerun EXPLAIN ANALYZE:

EXPLAIN ANALYZE
SELECT id, status
FROM orders
WHERE customer_id = 12345
ORDER BY created_at DESC
LIMIT 10;

New output (example):

Limit  (cost=0.28..8.47 rows=10 width=32) (actual time=0.234..0.456 rows=10 loops=1)
  ->  Index Scan using orders_customer_id_created_at_idx on orders  (cost=0.28..1500.00 rows=100 width=32) (actual time=0.233..0.400 rows=10 loops=1)
        Index Cond: (customer_id = 12345)
Planning Time: 0.200 ms
Execution Time: 0.512 ms

Result: execution time dropped from ~45 ms to under 1 ms. EXPLAIN ANALYZE proved the benefit and validated the index choice.
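Beyond a one-off plan check, you can confirm that a new index keeps getting used in real traffic via the standard statistics views (a sketch; pg_stat_user_indexes is a built-in PostgreSQL view):

```sql
-- idx_scan counts how many times each index on the table has been used
-- since statistics were last reset; a stuck-at-zero index is a candidate to drop
SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE relname = 'orders';
```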
Common EXPLAIN ANALYZE signals and fixes
- Actual rows >> Estimated rows (or the reverse): cardinality estimates are wrong. Fix by running ANALYZE to refresh statistics, raising the per-column statistics target (ALTER TABLE ... ALTER COLUMN ... SET STATISTICS), or creating extended statistics (CREATE STATISTICS) for correlated columns.
- Sequential scan on a large table: consider a targeted index, partial index, or table partitioning.
- High sort cost: create an index that supports ORDER BY or increase work_mem for large sorts.
- Nested loop with large inner scans: consider a hash join, or add indexes on join keys; tune join_collapse_limit and from_collapse_limit only with caution.
- High shared read counts in the BUFFERS output: data is coming from disk rather than cache. Investigate the I/O subsystem, consider increasing effective_cache_size, and check VACUUM activity and autovacuum settings, since table bloat inflates the number of pages that must be read.
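For the misestimation case in particular, the statistics-related fixes look like this (a sketch; the column choices are illustrative for the orders table):

```sql
-- Refresh planner statistics for one table
ANALYZE orders;

-- Collect more detailed statistics for a skewed column
-- (the default per-column target is 100)
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE orders;

-- PostgreSQL 10+: extended statistics for correlated columns,
-- so the planner stops multiplying independent selectivities
CREATE STATISTICS orders_status_customer_stats (dependencies)
    ON status, customer_id FROM orders;
ANALYZE orders;
```

After each change, rerun EXPLAIN ANALYZE and check whether the estimated and actual row counts have converged.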
Advanced EXPLAIN ANALYZE options
Useful variants:
- EXPLAIN (ANALYZE, BUFFERS) — shows buffer usage to identify I/O bound steps.
- EXPLAIN (ANALYZE, VERBOSE) — more detail about node properties.
- EXPLAIN (ANALYZE, TIMING OFF) — avoids per-node timing overhead for very fast queries in benchmarks.
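The options can be combined in a single call; for example (FORMAT JSON is another documented option, handy for feeding plans into visualization tools):

```sql
-- Full detail for investigation; JSON output is machine-readable
EXPLAIN (ANALYZE, BUFFERS, VERBOSE, FORMAT JSON)
SELECT id, status
FROM orders
WHERE customer_id = 12345;
```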
Practical tips from production DBAs
- Use a realistic dataset. Small dev tables will hide performance issues. Quick Deploy makes it easy to provision realistic test environments.
- Remember that EXPLAIN ANALYZE actually executes the statement. For SELECTs that is usually safe; for INSERT, UPDATE, or DELETE, wrap it in a transaction and roll it back (BEGIN; EXPLAIN ANALYZE ...; ROLLBACK;), or run it on a replica or during a low-traffic window.
- Combine EXPLAIN ANALYZE with pg_stat_statements to find frequently slow queries.
- Keep statistics current: autovacuum and ANALYZE are essential. For heavily updated tables, tune autovacuum or schedule manual analyze jobs.
- Consider configuration tuning: work_mem, shared_buffers, effective_cache_size, and random_page_cost affect planner decisions.
- Remember encryption and network layers. Worlber PGEE encryption is transparent but heavy encryption workloads can affect CPU — use EXPLAIN ANALYZE to isolate database-side cost vs transport overhead.
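To find the queries worth this effort in the first place, pg_stat_statements is the usual starting point (a sketch; the extension must be preloaded via shared_preload_libraries and created in the database):

```sql
-- Top 10 statements by mean execution time.
-- Column names shown are for PostgreSQL 13+;
-- older versions use total_time / mean_time instead.
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```

Feed the worst offenders from this list into the EXPLAIN ANALYZE workflow above.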
Conclusion
EXPLAIN ANALYZE is indispensable for optimizing slow PostgreSQL queries. The method is simple: measure, interpret, change, and validate. Start with the obvious fixes — indexes and statistics — then move to query refactors and configuration tuning. For teams in Saudi Arabia and the Middle East, managed platforms like Worlber Carbonate or Quick Deploy provide safe environments to test changes before deploying to production.
Ready to Transform Your Database Infrastructure?
Worlber helps companies across Saudi Arabia and the Middle East build reliable, secure, and cost-effective database solutions.
📧 Email us: contactus@worlber.com
🌐 Visit: worlber.sa/contact
📞 Call: Talk to our database experts today
Worlber - Your trusted PostgreSQL partner in the Middle East