How to Monitor PostgreSQL Performance with Built-in Tools
Monitor PostgreSQL performance with built-in tools (no paid software needed)
As a senior PostgreSQL DBA, I am often asked how organizations can monitor PostgreSQL performance without investing in commercial tools. This guide shows practical, production-tested techniques to monitor PostgreSQL performance with built-in tools and lightweight OS utilities — ideal for teams in Saudi Arabia and the GCC that must balance cost, PDPL requirements, and Vision 2030 cloud-first initiatives.
What you need and how to start
No licence, no third-party agent: use PostgreSQL's contrib modules (pg_stat_statements, auto_explain, pg_buffercache), core statistics views, the server log, and standard Linux tools (top, iostat, vmstat). Enable a few settings in postgresql.conf and you can begin collecting actionable metrics immediately.
Enable pg_stat_statements — your first port of call
pg_stat_statements aggregates per-query statistics and is indispensable for finding slow queries and hotspots.
# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all

Changing shared_preload_libraries requires a server restart. Then, inside the database:
psql -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"

-- Top queries by total time
SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;

This quickly highlights the queries that consume the most CPU or I/O. Note that on PostgreSQL 13 and later the timing columns are named total_exec_time and mean_exec_time.
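The ranking logic is worth internalizing: total time finds the heaviest queries overall, while mean time per call separates a cheap-but-hot query from a genuinely slow one. A minimal Python sketch with made-up rows (not real pg_stat_statements output) illustrates the distinction:

```python
# Hypothetical sample rows in the shape (query, calls, total_time_ms),
# mimicking what pg_stat_statements returns -- values are illustrative.
rows = [
    ("SELECT * FROM orders WHERE id = $1", 120000, 95000.0),
    ("UPDATE stock SET qty = qty - $1", 400, 180000.0),
    ("SELECT count(*) FROM sessions", 50, 2500.0),
]

# Rank by total time; then derive mean time per call. A high-mean,
# low-call query is a different tuning target than a cheap, hot one.
ranked = sorted(rows, key=lambda r: r[2], reverse=True)
for query, calls, total in ranked:
    print(f"{total:>10.1f} ms total  {total / calls:>8.3f} ms/call  {query}")
```

Here the UPDATE tops the list despite having 300x fewer calls than the point-lookup SELECT, which is exactly the kind of signal the real view surfaces.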
Track activity, blocking and long-running queries
Use pg_stat_activity and pg_locks to find blocking chains and idle-in-transaction sessions that cause bloat and locking issues.
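Before running lock queries, it helps to picture what a blocking chain is: session A waits on B, B waits on C, and the root of the chain (here C) is usually the session to investigate or terminate first. A small sketch with hypothetical pids, assuming you have already extracted (blocked_pid, blocker_pid) pairs from pg_locks:

```python
# Hypothetical blocked -> blocker pid pairs; real values would come
# from a pg_locks / pg_stat_activity join.
edges = {4101: 4203, 4203: 4350, 4988: 4350}

def root_blocker(pid, edges):
    """Walk a blocking chain to its root, guarding against cycles
    (a deadlock shows up as a cycle; PostgreSQL resolves those itself)."""
    seen = set()
    while pid in edges and pid not in seen:
        seen.add(pid)
        pid = edges[pid]
    return pid

for blocked in edges:
    print(blocked, "ultimately waits on", root_blocker(blocked, edges))
```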
-- Active and blocking queries
SELECT pid, usename, state, now() - query_start AS duration, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY duration DESC
LIMIT 20;

-- Identify blocking relationships
SELECT blocked.pid AS blocked_pid,
       blocked.query AS blocked_query,
       blocker.pid AS blocker_pid,
       blocker.query AS blocker_query
FROM pg_catalog.pg_locks blocked_l
JOIN pg_catalog.pg_stat_activity blocked ON blocked.pid = blocked_l.pid
JOIN pg_catalog.pg_locks blocker_l ON blocked_l.locktype = blocker_l.locktype
    AND blocked_l.database IS NOT DISTINCT FROM blocker_l.database
    AND blocked_l.relation IS NOT DISTINCT FROM blocker_l.relation
    AND blocked_l.page IS NOT DISTINCT FROM blocker_l.page
    AND blocked_l.tuple IS NOT DISTINCT FROM blocker_l.tuple
    AND blocked_l.virtualxid IS NOT DISTINCT FROM blocker_l.virtualxid
    AND blocked_l.transactionid IS NOT DISTINCT FROM blocker_l.transactionid
    AND blocker_l.pid <> blocked_l.pid
    AND blocker_l.granted
JOIN pg_catalog.pg_stat_activity blocker ON blocker.pid = blocker_l.pid
WHERE NOT blocked_l.granted;

On PostgreSQL 9.6 and later, pg_blocking_pids(pid) offers a much simpler way to find who is blocking a given session.

Use EXPLAIN ANALYZE and auto_explain for query plans
Run EXPLAIN ANALYZE for specific problem queries. To capture slow queries automatically, use the auto_explain extension.
# postgresql.conf
# shared_preload_libraries takes a comma-separated list, so combine
# this with pg_stat_statements rather than overwriting it
shared_preload_libraries = 'pg_stat_statements, auto_explain'
auto_explain.log_min_duration = '500ms'
auto_explain.log_analyze = on

auto_explain writes the full plan to the server log whenever a query exceeds the threshold, which makes it ideal for catching seasonal jobs or ad-hoc spikes. Be aware that log_analyze adds instrumentation overhead to every query, not just the slow ones.
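Since auto_explain output lands in the plain-text server log, a small parser helps with triage. A sketch assuming a log line shaped roughly like the default format (the exact prefix depends on your log_line_prefix setting, so the sample line is illustrative):

```python
import re

# Hypothetical log line resembling what auto_explain emits; the prefix
# (timestamp etc.) varies with the server's log_line_prefix.
line = "2024-05-01 10:02:11 UTC LOG:  duration: 742.133 ms  plan:"

# Extract the duration so slow plans can be sorted or alerted on.
m = re.search(r"duration: ([\d.]+) ms", line)
duration_ms = float(m.group(1)) if m else None
print(duration_ms)
```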
Database-level counters: checkpoints, bgwriter, temp files
Core statistics views show background I/O, temporary file usage, and checkpoint behaviour that often explains poor performance.
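These counters are most useful as ratios rather than raw numbers. For example, a large share of requested (WAL-size-driven) checkpoints relative to timed ones commonly suggests max_wal_size is too small for the write load. A sketch with illustrative values:

```python
# Sample counters in the shape of pg_stat_bgwriter output;
# the figures here are made up for illustration.
checkpoints_timed = 820
checkpoints_req = 410

# A high requested share means checkpoints are being forced by WAL
# volume instead of running on schedule -- a tuning cue, not a rule.
requested_share = checkpoints_req / (checkpoints_timed + checkpoints_req)
print(f"{requested_share:.0%} of checkpoints were requested (WAL-driven)")
```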
-- Background writer and checkpoint stats
SELECT checkpoints_timed, checkpoints_req, checkpoint_write_time, checkpoint_sync_time
FROM pg_stat_bgwriter;

-- Temp file usage (heavy temp usage = external sorts)
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
ORDER BY temp_bytes DESC
LIMIT 10;

On PostgreSQL 17 and later, the checkpoint counters have moved to the pg_stat_checkpointer view.

Index effectiveness and table bloat
Find unused indexes and identify tables needing VACUUM or REINDEX.
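As a rough heuristic (an illustrative threshold, not an official rule), a dead-tuple ratio above about 10% on a large table is a cue to check autovacuum settings. A sketch with hypothetical table statistics:

```python
# Hypothetical (table, n_live_tup, n_dead_tup) figures in the shape
# of pg_stat_user_tables output.
tables = [("orders", 5_000_000, 1_200_000), ("users", 800_000, 4_000)]

flagged = []
for name, live, dead in tables:
    ratio = dead / (live + dead)
    if ratio > 0.10:  # illustrative threshold; tune per workload
        flagged.append(name)
    print(f"{name:<10} dead-tuple ratio {ratio:.1%}")
```

Here only orders crosses the threshold, so it would be the first candidate for a manual VACUUM or for more aggressive autovacuum settings.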
-- Index usage (idx_scan = 0 means the index has never been used
-- since the statistics were last reset)
SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE idx_scan = 0
LIMIT 20;

-- Tuple statistics (dead tuples may indicate bloat)
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;

Buffer and cache inspection
pg_buffercache helps you inspect buffer contents so you can see working set behaviour:
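pg_buffercache reports counts of buffers (8 kB pages by default), so converting a count into bytes and into a share of shared_buffers makes the numbers meaningful. A sketch with hypothetical figures:

```python
# Default PostgreSQL block size; a non-standard build may differ.
BLOCK_SIZE = 8192
shared_buffers_pages = 524_288   # e.g. 4 GB of shared_buffers

# Hypothetical buffer count for one relation, as counted from
# pg_buffercache.
buffers_for_object = 131_072

size_mb = buffers_for_object * BLOCK_SIZE / 1024 / 1024
share = buffers_for_object / shared_buffers_pages
print(f"{size_mb:.0f} MB cached ({share:.0%} of shared_buffers)")
```

An object holding a quarter of the cache, as in this example, is worth knowing about before you resize shared_buffers.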
-- install the contrib extension if needed
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Top objects by buffer usage
SELECT relfilenode, count(*) AS buffers
FROM pg_buffercache
GROUP BY relfilenode
ORDER BY buffers DESC
LIMIT 20;

System-level metrics: CPU, I/O, memory
PostgreSQL performance is only half the story — OS metrics matter. Use these commands on Linux to correlate database activity with system pressure:
top
vmstat 2 5
iostat -x 2 5
sar -n DEV 1 5

Sample quick-check from the server:
-- Buffer cache hits vs disk reads across all databases
SELECT date_trunc('second', now()) AS ts,
       sum(blks_hit) AS blks_hit, sum(blks_read) AS blks_read
FROM pg_stat_database;

Real-time dashboards with psql and lightweight scripts
You can build a simple, real-time view without extra software using psql and watch (or a shell loop). Example: watch a top-10 slow queries list every 5 seconds:
watch -n 5 "psql -c \"SELECT query, calls, total_time FROM pg_stat_statements ORDER BY total_time DESC LIMIT 10;\""

Alternatively, a small cron-based script can email an alert when replication lag or long-running transactions exceed a threshold.
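The alert logic itself can stay tiny. In this sketch everything is illustrative: on a replica the lag value would come from a query such as SELECT extract(epoch FROM now() - pg_last_xact_replay_timestamp()), and the threshold is one you would tune to your SLA:

```python
REPLICATION_LAG_LIMIT_S = 60  # illustrative threshold, tune to your SLA

def should_alert(lag_seconds, limit=REPLICATION_LAG_LIMIT_S):
    """Return True when the measured lag (in seconds) exceeds the limit.

    A None lag (e.g. the replica has never replayed anything, or the
    query failed) is deliberately treated as "no alert" here; a stricter
    setup might alert on missing data instead.
    """
    return lag_seconds is not None and lag_seconds > limit

print(should_alert(12))   # within budget
print(should_alert(180))  # would trigger an email
```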
When to consider more than built-in tools
Built-in tools are powerful, but large deployments or teams with strict SLA/observability needs may choose enterprise solutions. For Saudi organisations subject to PDPL and NCA frameworks, PGEE (PostgreSQL Enterprise Edition) offers hardened features like auditing and advanced support — useful when you require vendor SLAs and integrated compliance features. For most day-to-day performance work, however, the methods above are sufficient and cost-effective.
Closing: a practical checklist
- Enable pg_stat_statements and auto_explain.
- Regularly inspect pg_stat_activity, pg_stat_database, pg_stat_bgwriter.
- Correlate DB counters with iostat/vmstat and server logs.
- Automate snapshot queries and alerts via cron or simple scripts.
- Document baselines (busy vs quiet) to detect regressions quickly.
Monitoring PostgreSQL performance with built-in tools is a low-cost, high-value approach that aligns with cloud adoption under Vision 2030 and PDPL-aware operations across the region. Start with the queries above, collect baselines, and iterate — the result is reliable, observable PostgreSQL without paying for commercial monitoring.
Ready to Transform Your Database Infrastructure?
Worlber helps companies across Saudi Arabia and the Middle East build reliable, secure, and cost-effective database solutions.
📧 Email us: contactus@worlber.com
🌐 Visit: worlber.sa/contact
📞 Call: Talk to our database experts today
Worlber — Your trusted PostgreSQL partner in the Middle East