Monitor PostgreSQL Performance with Built-in Tools

Introduction

Monitoring PostgreSQL doesn't require expensive third-party software. PostgreSQL ships with powerful built-in tools and statistics views that, when used correctly, provide the visibility you need to diagnose slow queries, I/O pressure, lock contention, and configuration issues. In this post I share practical techniques and SQL examples you can run with psql or any PostgreSQL client — useful for DBAs and developers responsible for production systems.

Why use built-in PostgreSQL monitoring?

Built-in monitoring is free, low-overhead, and works across cloud and on-prem deployments. It gives you immediate access to runtime activity, historical aggregated statistics and configuration-driven logging. Combine these sources to identify bottlenecks and drive targeted tuning.

Key built-in tools and views

  • pg_stat_activity — current sessions and queries
  • pg_stat_statements (extension) — aggregated per-query statistics
  • pg_stat_database, pg_stat_user_tables, pg_statio_user_tables — database and relation-level stats
  • pg_stat_bgwriter — checkpoint and writer activity
  • pg_locks — lock contention
  • EXPLAIN ANALYZE and auto_explain — query execution details
  • Server logs — slow queries and autovacuum output controlled by settings such as log_min_duration_statement
  • pg_buffercache (extension) — buffer contents and cache residency

Enable the important extensions

Two extensions are worth enabling: pg_stat_statements for aggregated query metrics (essential), and pg_buffercache for inspecting shared-buffer residency (optional). Create them as a superuser, and add pg_stat_statements to shared_preload_libraries in postgresql.conf — note that changing that setting requires a server restart.

-- in postgresql.conf (restart required)
shared_preload_libraries = 'pg_stat_statements'

-- then as superuser
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
CREATE EXTENSION IF NOT EXISTS pg_buffercache; -- optional
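With pg_buffercache installed, a query along these lines (adapted from the example in the PostgreSQL documentation) shows which relations in the current database occupy the most shared buffers:

-- top relations by shared-buffer usage in the current database
SELECT n.nspname, c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
JOIN pg_namespace n ON n.oid = c.relnamespace
GROUP BY n.nspname, c.relname
ORDER BY buffers DESC
LIMIT 10;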

Practical queries and examples

Here are a few go-to queries I use during performance investigations.

1) Top time-consuming queries (pg_stat_statements)

-- PostgreSQL 13+; on version 12 and earlier the columns are total_time / mean_time
SELECT query, calls, total_exec_time, mean_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

Use this to find queries that consume the most CPU/elapsed time. Check the query text, plan them with EXPLAIN (or EXPLAIN ANALYZE in a safe environment), and look for missing indexes or suboptimal joins.
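For example, to see a suspect query's actual plan, timings, and buffer usage (the table and predicate here are illustrative placeholders, not from a real schema):

-- run against a copy of production data where possible
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;  -- hypothetical table and predicate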

2) Active sessions and blocking (pg_stat_activity and pg_locks)

-- Long-running active queries
SELECT pid, usename, now() - query_start AS duration, state, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY duration DESC
LIMIT 20;

-- Find blockers and blocked (PostgreSQL 9.6+)
-- pg_blocking_pids() covers all lock types, including transaction-ID waits
SELECT a.pid  AS blocked_pid,  a.query AS blocked_query,
       b.pid  AS blocking_pid, b.query AS blocking_query
FROM pg_stat_activity a
JOIN pg_stat_activity b
  ON b.pid = ANY (pg_blocking_pids(a.pid))
WHERE cardinality(pg_blocking_pids(a.pid)) > 0;

3) Cache hit ratio and I/O insights

SELECT datname,
       blks_hit, blks_read,
       ROUND(100.0 * blks_hit / NULLIF(blks_hit + blks_read,0),2) AS hit_percent
FROM pg_stat_database
ORDER BY blks_read DESC;

Low cache hit rates (well under 95–99% depending on workload) may indicate insufficient shared_buffers or dataset larger than available memory.
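The same calculation works per table via pg_statio_user_tables, which helps pinpoint the relations driving those reads:

-- per-table heap cache hit rate, heaviest readers first
SELECT relname,
       heap_blks_hit, heap_blks_read,
       ROUND(100.0 * heap_blks_hit
             / NULLIF(heap_blks_hit + heap_blks_read, 0), 2) AS hit_percent
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;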

4) Table-level hotspots

SELECT relname,
       seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
FROM pg_stat_user_tables
ORDER BY seq_scan DESC
LIMIT 20;

High seq_scan with low idx_scan suggests a missing index, or an existing index the planner is not choosing — though sequential scans on small tables are normal and cheap.
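The flip side is indexes that are never read but still cost every write; pg_stat_user_indexes can surface drop candidates. Stats reset on restart and replicas keep their own counters, so verify everywhere before dropping anything:

-- indexes with zero scans since the stats were last reset
SELECT schemaname, relname, indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC
LIMIT 20;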

5) Checkpoints, write activity, and long checkpoint times

SELECT * FROM pg_stat_bgwriter;

Compare checkpoints_req with checkpoints_timed: a high proportion of requested (WAL-triggered) checkpoints indicates max_wal_size is too small. Frequent checkpoints hurt throughput through write spikes and extra full-page WAL images; raise checkpoint_timeout and max_wal_size to space them out. (On PostgreSQL 17+, the checkpoint counters live in pg_stat_checkpointer instead.)
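A quick ratio check (column names as found in pg_stat_bgwriter up through PostgreSQL 16; on 17+ query pg_stat_checkpointer):

-- share of checkpoints forced by WAL volume rather than the timer
SELECT checkpoints_timed, checkpoints_req,
       ROUND(100.0 * checkpoints_req
             / NULLIF(checkpoints_timed + checkpoints_req, 0), 2)
         AS pct_requested
FROM pg_stat_bgwriter;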

Using EXPLAIN and auto_explain

When you find an expensive query, run EXPLAIN ANALYZE (in a non-production copy or during off-peak if necessary) to see actual timings and row counts. For production, consider auto_explain to log slow plans automatically:

-- postgresql.conf (example; restart required)
-- list every preloaded library in the one setting
shared_preload_libraries = 'pg_stat_statements,auto_explain'
auto_explain.log_min_duration = '500ms'
auto_explain.log_analyze = on

This logs plans for queries slower than the threshold, capturing problematic executions without manual intervention. Be aware that auto_explain.log_analyze = on instruments every statement, not just the slow ones, so it adds measurable overhead on busy systems — enable it deliberately.

Reading server logs

Adjust logging to capture slow statements and useful context:

log_min_duration_statement = 1000  -- log statements > 1s
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a'
log_checkpoints = on
log_autovacuum_min_duration = 0

Combine log analysis with pg_stat_statements to correlate slow events with aggregated metrics.

Interpreting the results — a short guide

  • High total_time in pg_stat_statements: focus on query plans and indexes.
  • High blks_read with low blks_hit: tune shared_buffers or use table partitioning.
  • Many idle-in-transaction sessions (pg_stat_activity): fix application transaction handling.
  • Frequent small checkpoints (pg_stat_bgwriter): consider increasing max_wal_size and checkpoint_timeout.
  • High temporary file creation: queries spilling to disk, tune work_mem or rewrite queries to reduce sorts/aggregations.
  • Lock contention in pg_locks: identify and optimize conflicting statements or add appropriate indexes.
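Temporary-file pressure, for instance, is visible per database in pg_stat_database:

-- databases spilling the most sort/hash work to disk
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_spilled
FROM pg_stat_database
ORDER BY temp_bytes DESC;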

Quick on-call checklist

  • Run the top queries from pg_stat_statements to spot recent heavy queries.
  • Check pg_stat_activity for long-running or idle transactions.
  • Inspect pg_stat_bgwriter and server logs for checkpoint and I/O issues.
  • Look for lock waits in pg_locks. Kill or optimize offending sessions as required.
  • Use EXPLAIN ANALYZE on suspects in a safe environment.
  • Adjust configuration (shared_buffers, work_mem, checkpoint settings) based on observed patterns, then measure again.
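When the checklist points at a runaway session, pg_cancel_backend() and pg_terminate_backend() are the built-in remedies (the pid below is a placeholder — take it from pg_stat_activity):

-- politely cancel the current query in session 12345
SELECT pg_cancel_backend(12345);
-- if that fails, terminate the whole session
SELECT pg_terminate_backend(12345);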

Monitoring is iterative: gather metrics, make a change, and measure the impact. Built-in PostgreSQL views and logging give you the raw data to do that without extra cost.


Ready to Transform Your Database Infrastructure?

Worlber helps companies across Saudi Arabia and the Middle East build reliable, secure, and cost-effective database solutions.

📧 Email us: contactus@worlber.com

🌐 Visit: worlber.sa/contact

📞 Call: Talk to our database experts today

Worlber - Your trusted PostgreSQL partner in the Middle East
