Another useful tool for monitoring database activity is the pg_locks
system table. It allows the database administrator to view information about the outstanding locks in the lock manager. For example, this capability can be used to:
View all the locks currently outstanding, all the locks on relations in a particular database, all the locks on a particular relation, or all the locks held by a particular PostgreSQL session.
Determine the relation in the current database with the most ungranted locks (which might be a source of contention among database clients); see the example query below.
Determine the effect of lock contention on overall database performance, as well as the extent to which contention varies with overall database traffic.
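For example, a query along the following lines (a sketch that relies on the relation, database, and granted columns of pg_locks) lists the relations in the current database with the most ungranted lock requests:

    -- relations in the current database with pending (ungranted) lock requests
    SELECT relation::regclass AS relation, count(*) AS ungranted
    FROM pg_locks
    WHERE NOT granted
      AND relation IS NOT NULL
      AND database = (SELECT oid FROM pg_database WHERE datname = current_database())
    GROUP BY relation
    ORDER BY ungranted DESC;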
Details of the pg_locks
view appear in Section 51.74. For more information on locking and managing concurrency with PostgreSQL, refer to Chapter 13.
On most Unix platforms, PostgreSQL modifies its command title as reported by ps
, so that individual server processes can readily be identified. A sample display is
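    $ ps auxww | grep ^postgres
    postgres  15551  0.0  0.1  57536  7132 pts/0    S    18:02   0:00 postgres -i
    postgres  15554  0.0  0.0  57536  1184 ?        Ss   18:02   0:00 postgres: background writer
    postgres  15555  0.0  0.0  57536   916 ?        Ss   18:02   0:00 postgres: checkpointer
    postgres  15556  0.0  0.0  57536   916 ?        Ss   18:02   0:00 postgres: walwriter
    postgres  15557  0.0  0.0  58504  2244 ?        Ss   18:02   0:00 postgres: autovacuum launcher
    postgres  15558  0.0  0.0  17512  1068 ?        Ss   18:02   0:00 postgres: stats collector
    postgres  15582  0.0  0.0  58772  3080 ?        Ss   18:04   0:00 postgres: joe runbug 127.0.0.1 idle
    postgres  15606  0.0  0.0  58772  3052 ?        Ss   18:07   0:00 postgres: tgl regression [local] SELECT waiting
    postgres  15610  0.0  0.0  58772  3056 ?        Ss   18:07   0:00 postgres: tgl regression [local] idle in transaction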
(The appropriate invocation of ps
varies across different platforms, as do the details of what is shown. This example is from a recent Linux system.) The first process listed here is the master server process. The command arguments shown for it are the same ones used when it was launched. The next five processes are background worker processes automatically launched by the master process. (The “stats collector” process will not be present if you have set the system not to start the statistics collector; likewise the “autovacuum launcher” process can be disabled.) Each of the remaining processes is a server process handling one client connection. Each such process sets its command line display in the form
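    postgres: user database host activity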
The user, database, and (client) host items remain the same for the life of the client connection, but the activity indicator changes. The activity can be idle
(i.e., waiting for a client command), idle in transaction
(waiting for client inside a BEGIN
block), or a command type name such as SELECT
. Also, waiting
is appended if the server process is presently waiting on a lock held by another session. In the above example we can infer that process 15606 is waiting for process 15610 to complete its transaction and thereby release some lock. (Process 15610 must be the blocker, because there is no other active session. In more complicated cases it would be necessary to look into the pg_locks
system view to determine who is blocking whom.)
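In recent releases the pg_blocking_pids() function provides a convenient shortcut for this kind of analysis; a sketch:

    -- blocked sessions and the PIDs holding the locks they are waiting for
    SELECT pid, pg_blocking_pids(pid) AS blocked_by,
           wait_event_type, wait_event, query
    FROM pg_stat_activity
    WHERE cardinality(pg_blocking_pids(pid)) > 0;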
If cluster_name has been configured, the cluster name will also be shown in the ps output; for example, with cluster_name = 'mycluster' (a hypothetical setting), the background writer process would appear as:
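    postgres: mycluster: background writer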
If you have turned off update_process_title then the activity indicator is not updated; the process title is set only once when a new process is launched. On some platforms this saves a measurable amount of per-command overhead; on others it's insignificant.
Solaris requires special handling. You must use /usr/ucb/ps
, rather than /bin/ps
. You also must use two w
flags, not just one. In addition, your original invocation of the postgres
command must have a shorter ps
status display than that provided by each server process. If you fail to do all three things, the ps
output for each server process will be the original postgres
command line.
PostgreSQL has the ability to report the progress of certain commands during command execution. Currently, the only commands which support progress reporting are ANALYZE
, CLUSTER
, CREATE INDEX
, VACUUM
, and BASE_BACKUP (i.e., the replication command that pg_basebackup issues to take a base backup). This may be expanded in the future.
Whenever ANALYZE
is running, the pg_stat_progress_analyze
view will contain a row for each backend that is currently running that command. The tables below describe the information that will be reported and provide information about how to interpret it.
pg_stat_progress_analyze View
Column Type
Description
pid
integer
Process ID of backend.
datid
oid
OID of the database to which this backend is connected.
datname
name
Name of the database to which this backend is connected.
relid
oid
OID of the table being analyzed.
phase
text
sample_blks_total
bigint
Total number of heap blocks that will be sampled.
sample_blks_scanned
bigint
Number of heap blocks scanned.
ext_stats_total
bigint
Number of extended statistics.
ext_stats_computed
bigint
Number of extended statistics computed. This counter only advances when the phase is computing extended statistics
.
child_tables_total
bigint
Number of child tables.
child_tables_done
bigint
Number of child tables scanned. This counter only advances when the phase is acquiring inherited sample rows
.
current_child_table_relid
oid
OID of the child table currently being scanned. This field is only valid when the phase is acquiring inherited sample rows
.
initializing
The command is preparing to begin scanning the heap. This phase is expected to be very brief.
acquiring sample rows
The command is currently scanning the table given by relid
to obtain sample rows.
acquiring inherited sample rows
The command is currently scanning child tables to obtain sample rows. Columns child_tables_total
, child_tables_done
, and current_child_table_relid
contain the progress information for this phase.
computing statistics
The command is computing statistics from the sample rows obtained during the table scan.
computing extended statistics
The command is computing extended statistics from the sample rows obtained during the table scan.
finalizing analyze
The command is updating pg_class
. When this phase is completed, ANALYZE
will end.
Note that when ANALYZE
is run on a partitioned table, all of its partitions are also recursively analyzed as also mentioned in ANALYZE. In that case, ANALYZE
progress is reported first for the parent table, whereby its inheritance statistics are collected, followed by that for each partition.
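For example, a monitoring query along these lines (a sketch over the columns above; the regclass cast only resolves names in the database being analyzed) reports sampling progress as a percentage:

    SELECT pid, datname, relid::regclass AS relation, phase,
           round(100.0 * sample_blks_scanned / nullif(sample_blks_total, 0), 1) AS sample_pct
    FROM pg_stat_progress_analyze;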
Whenever CREATE INDEX or REINDEX is running, the pg_stat_progress_create_index view will contain a row for each backend that is currently creating indexes. The tables below describe the information that will be reported and provide information about how to interpret it.
pg_stat_progress_create_index View
Column Type
Description
pid
integer
Process ID of backend.
datid
oid
OID of the database to which this backend is connected.
datname
name
Name of the database to which this backend is connected.
relid
oid
OID of the table on which the index is being created.
index_relid
oid
OID of the index being created or reindexed. During a non-concurrent CREATE INDEX
, this is 0.
command
text
The command that is running: CREATE INDEX
, CREATE INDEX CONCURRENTLY
, REINDEX
, or REINDEX CONCURRENTLY
.
phase
text
lockers_total
bigint
Total number of lockers to wait for, when applicable.
lockers_done
bigint
Number of lockers already waited for.
current_locker_pid
bigint
Process ID of the locker currently being waited for.
blocks_total
bigint
Total number of blocks to be processed in the current phase.
blocks_done
bigint
Number of blocks already processed in the current phase.
tuples_total
bigint
Total number of tuples to be processed in the current phase.
tuples_done
bigint
Number of tuples already processed in the current phase.
partitions_total
bigint
When creating an index on a partitioned table, this column is set to the total number of partitions on which the index is to be created.
partitions_done
bigint
When creating an index on a partitioned table, this column is set to the number of partitions on which the index has been completed.
initializing
CREATE INDEX
or REINDEX
is preparing to create the index. This phase is expected to be very brief.
waiting for writers before build
CREATE INDEX CONCURRENTLY
or REINDEX CONCURRENTLY
is waiting for transactions with write locks that can potentially see the table to finish. This phase is skipped when not in concurrent mode. Columns lockers_total
, lockers_done
and current_locker_pid
contain the progress information for this phase.
building index
The index is being built by the access method-specific code. In this phase, access methods that support progress reporting fill in their own progress data, and the subphase is indicated in this column. Typically, blocks_total
and blocks_done
will contain progress data, as well as potentially tuples_total
and tuples_done
.
waiting for writers before validation
CREATE INDEX CONCURRENTLY
or REINDEX CONCURRENTLY
is waiting for transactions with write locks that can potentially write into the table to finish. This phase is skipped when not in concurrent mode. Columns lockers_total
, lockers_done
and current_locker_pid
contain the progress information for this phase.
index validation: scanning index
CREATE INDEX CONCURRENTLY
is scanning the index searching for tuples that need to be validated. This phase is skipped when not in concurrent mode. Columns blocks_total
(set to the total size of the index) and blocks_done
contain the progress information for this phase.
index validation: sorting tuples
CREATE INDEX CONCURRENTLY
is sorting the output of the index scanning phase.
index validation: scanning table
CREATE INDEX CONCURRENTLY
is scanning the table to validate the index tuples collected in the previous two phases. This phase is skipped when not in concurrent mode. Columns blocks_total
(set to the total size of the table) and blocks_done
contain the progress information for this phase.
waiting for old snapshots
CREATE INDEX CONCURRENTLY
or REINDEX CONCURRENTLY
is waiting for transactions that can potentially see the table to release their snapshots. This phase is skipped when not in concurrent mode. Columns lockers_total
, lockers_done
and current_locker_pid
contain the progress information for this phase.
waiting for readers before marking dead
REINDEX CONCURRENTLY
is waiting for transactions with read locks on the table to finish, before marking the old index dead. This phase is skipped when not in concurrent mode. Columns lockers_total
, lockers_done
and current_locker_pid
contain the progress information for this phase.
waiting for readers before dropping
REINDEX CONCURRENTLY
is waiting for transactions with read locks on the table to finish, before dropping the old index. This phase is skipped when not in concurrent mode. Columns lockers_total
, lockers_done
and current_locker_pid
contain the progress information for this phase.
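For example, the following sketch (using the columns above) shows each index build together with its block- and tuple-level progress in the current phase:

    SELECT pid, relid::regclass AS table_name, index_relid::regclass AS index_name,
           command, phase,
           round(100.0 * blocks_done / nullif(blocks_total, 0), 1) AS blocks_pct,
           round(100.0 * tuples_done / nullif(tuples_total, 0), 1) AS tuples_pct
    FROM pg_stat_progress_create_index;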
Whenever VACUUM
is running, the pg_stat_progress_vacuum
view will contain one row for each backend (including autovacuum worker processes) that is currently vacuuming. The tables below describe the information that will be reported and provide information about how to interpret it. Progress for VACUUM FULL
commands is reported via pg_stat_progress_cluster
because both VACUUM FULL
and CLUSTER
rewrite the table, while regular VACUUM
only modifies it in place. See Section 27.4.4.
pg_stat_progress_vacuum View
Column Type
Description
pid
integer
Process ID of backend.
datid
oid
OID of the database to which this backend is connected.
datname
name
Name of the database to which this backend is connected.
relid
oid
OID of the table being vacuumed.
phase
text
heap_blks_total
bigint
Total number of heap blocks in the table. This number is reported as of the beginning of the scan; blocks added later will not be (and need not be) visited by this VACUUM
.
heap_blks_scanned
bigint
heap_blks_vacuumed
bigint
Number of heap blocks vacuumed. Unless the table has no indexes, this counter only advances when the phase is vacuuming heap
. Blocks that contain no dead tuples are skipped, so the counter may sometimes skip forward in large increments.
index_vacuum_count
bigint
Number of completed index vacuum cycles.
max_dead_tuples
bigint
num_dead_tuples
bigint
Number of dead tuples collected since the last index vacuum cycle.
initializing
VACUUM
is preparing to begin scanning the heap. This phase is expected to be very brief.
scanning heap
VACUUM
is currently scanning the heap. It will prune and defragment each page if required, and possibly perform freezing activity. The heap_blks_scanned
column can be used to monitor the progress of the scan.
vacuuming indexes
vacuuming heap
VACUUM
is currently vacuuming the heap. Vacuuming the heap is distinct from scanning the heap, and occurs after each instance of vacuuming indexes. If heap_blks_scanned
is less than heap_blks_total
, the system will return to scanning the heap after this phase is completed; otherwise, it will begin cleaning up indexes after this phase is completed.
cleaning up indexes
VACUUM
is currently cleaning up indexes. This occurs after the heap has been completely scanned and all vacuuming of the indexes and the heap has been completed.
truncating heap
VACUUM
is currently truncating the heap so as to return empty pages at the end of the relation to the operating system. This occurs after cleaning up indexes.
performing final cleanup
VACUUM
is performing final cleanup. During this phase, VACUUM
will vacuum the free space map, update statistics in pg_class
, and report statistics to the statistics collector. When this phase is completed, VACUUM
will end.
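For example, a query like the following (a sketch; note that the regclass cast only resolves names in the current database, which matters for autovacuum workers connected elsewhere) reports scan and vacuum progress:

    SELECT pid, datname, relid::regclass AS relation, phase,
           round(100.0 * heap_blks_scanned / nullif(heap_blks_total, 0), 1) AS scanned_pct,
           round(100.0 * heap_blks_vacuumed / nullif(heap_blks_total, 0), 1) AS vacuumed_pct,
           index_vacuum_count, num_dead_tuples, max_dead_tuples
    FROM pg_stat_progress_vacuum;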
Whenever CLUSTER
or VACUUM FULL
is running, the pg_stat_progress_cluster
view will contain a row for each backend that is currently running either command. The tables below describe the information that will be reported and provide information about how to interpret it.
pg_stat_progress_cluster View
Column Type
Description
pid
integer
Process ID of backend.
datid
oid
OID of the database to which this backend is connected.
datname
name
Name of the database to which this backend is connected.
relid
oid
OID of the table being clustered.
command
text
The command that is running. Either CLUSTER
or VACUUM FULL
.
phase
text
cluster_index_relid
oid
If the table is being scanned using an index, this is the OID of the index being used; otherwise, it is zero.
heap_tuples_scanned
bigint
Number of heap tuples scanned. This counter only advances when the phase is seq scanning heap
, index scanning heap
or writing new heap
.
heap_tuples_written
bigint
Number of heap tuples written. This counter only advances when the phase is seq scanning heap
, index scanning heap
or writing new heap
.
heap_blks_total
bigint
Total number of heap blocks in the table. This number is reported as of the beginning of seq scanning heap
.
heap_blks_scanned
bigint
Number of heap blocks scanned. This counter only advances when the phase is seq scanning heap
.
index_rebuild_count
bigint
Number of indexes rebuilt. This counter only advances when the phase is rebuilding index
.
initializing
The command is preparing to begin scanning the heap. This phase is expected to be very brief.
seq scanning heap
The command is currently scanning the table using a sequential scan.
index scanning heap
CLUSTER
is currently scanning the table using an index scan.
sorting tuples
CLUSTER
is currently sorting tuples.
writing new heap
CLUSTER
is currently writing the new heap.
swapping relation files
The command is currently swapping newly-built files into place.
rebuilding index
The command is currently rebuilding an index.
performing final cleanup
The command is performing final cleanup. When this phase is completed, CLUSTER
or VACUUM FULL
will end.
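For example (a sketch over the columns above):

    SELECT pid, datname, relid::regclass AS relation, command, phase,
           heap_tuples_scanned, heap_tuples_written, index_rebuild_count
    FROM pg_stat_progress_cluster;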
Whenever an application like pg_basebackup is taking a base backup, the pg_stat_progress_basebackup
view will contain a row for each WAL sender process that is currently running the BASE_BACKUP
replication command and streaming the backup. The tables below describe the information that will be reported and provide information about how to interpret it.
pg_stat_progress_basebackup View
Column Type
Description
pid
integer
Process ID of a WAL sender process.
phase
text
backup_total
bigint
Total amount of data that will be streamed. This is estimated and reported as of the beginning of streaming database files
phase. Note that this is only an approximation since the database may change during streaming database files
phase and WAL log may be included in the backup later. This is always the same value as backup_streamed
once the amount of data streamed exceeds the estimated total size. If the estimation is disabled in pg_basebackup (i.e., --no-estimate-size
option is specified), this is NULL
.
backup_streamed
bigint
Amount of data streamed. This counter only advances when the phase is streaming database files
or transferring wal files
.
tablespaces_total
bigint
Total number of tablespaces that will be streamed.
tablespaces_streamed
bigint
Number of tablespaces streamed. This counter only advances when the phase is streaming database files
.
initializing
The WAL sender process is preparing to begin the backup. This phase is expected to be very brief.
waiting for checkpoint to finish
The WAL sender process is currently performing pg_start_backup
to prepare to take a base backup, and waiting for the start-of-backup checkpoint to finish.
estimating backup size
The WAL sender process is currently estimating the total amount of database files that will be streamed as a base backup.
streaming database files
The WAL sender process is currently streaming database files as a base backup.
waiting for wal archiving to finish
The WAL sender process is currently performing pg_stop_backup
to finish the backup, and waiting for all the WAL files required for the base backup to be successfully archived. If either --wal-method=none
or --wal-method=stream
is specified in pg_basebackup, the backup will end when this phase is completed.
transferring wal files
The WAL sender process is currently transferring all WAL logs generated during the backup. This phase occurs after waiting for wal archiving to finish
phase if --wal-method=fetch
is specified in pg_basebackup. The backup will end when this phase is completed.
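For example, a query such as the following (a sketch; backup_total is NULL when size estimation is disabled, so the percentage is NULL in that case) reports streaming progress:

    SELECT pid, phase,
           round(100.0 * backup_streamed / nullif(backup_total, 0), 1) AS streamed_pct,
           tablespaces_streamed, tablespaces_total
    FROM pg_stat_progress_basebackup;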
Additional notes on the columns and phases above:

The phase column of each progress view reports the current processing phase, as described in the phase lists above.

heap_blks_scanned (pg_stat_progress_vacuum): Number of heap blocks scanned. Because the visibility map is used to optimize scans, some blocks will be skipped without inspection; skipped blocks are included in this total, so that this number will eventually become equal to heap_blks_total when the vacuum is complete. This counter only advances when the phase is scanning heap.

max_dead_tuples (pg_stat_progress_vacuum): Number of dead tuples that we can store before needing to perform an index vacuum cycle, based on maintenance_work_mem.

vacuuming indexes (pg_stat_progress_vacuum phase): VACUUM is currently vacuuming the indexes. If a table has any indexes, this will happen at least once per vacuum, after the heap has been completely scanned. It may happen multiple times per vacuum if maintenance_work_mem is insufficient to store the number of dead tuples found.
PostgreSQL provides facilities to support dynamic tracing of the database server. This allows an external utility to be called at specific points in the code and thereby trace execution.
A number of probes or trace points are already inserted into the source code. These probes are intended to be used by database developers and administrators. By default the probes are not compiled into PostgreSQL; the user needs to explicitly tell the configure script to make the probes available.
Currently, the DTrace utility is supported, which, at the time of this writing, is available on Solaris, macOS, FreeBSD, NetBSD, and Oracle Linux. The SystemTap project for Linux provides a DTrace equivalent and can also be used. Supporting other dynamic tracing utilities is theoretically possible by changing the definitions for the macros in src/include/utils/probes.h
.
By default, probes are not available, so you will need to explicitly tell the configure script to make the probes available in PostgreSQL. To include DTrace support specify --enable-dtrace
to configure. See Section 16.4 for further information.
A number of standard probes are provided in the source code, as shown in Table 27.28; Table 27.29 shows the types used in the probes. More probes can certainly be added to enhance PostgreSQL's observability.
transaction-start
(LocalTransactionId)
Probe that fires at the start of a new transaction. arg0 is the transaction ID.
transaction-commit
(LocalTransactionId)
Probe that fires when a transaction completes successfully. arg0 is the transaction ID.
transaction-abort
(LocalTransactionId)
Probe that fires when a transaction completes unsuccessfully. arg0 is the transaction ID.
query-start
(const char *)
Probe that fires when the processing of a query is started. arg0 is the query string.
query-done
(const char *)
Probe that fires when the processing of a query is complete. arg0 is the query string.
query-parse-start
(const char *)
Probe that fires when the parsing of a query is started. arg0 is the query string.
query-parse-done
(const char *)
Probe that fires when the parsing of a query is complete. arg0 is the query string.
query-rewrite-start
(const char *)
Probe that fires when the rewriting of a query is started. arg0 is the query string.
query-rewrite-done
(const char *)
Probe that fires when the rewriting of a query is complete. arg0 is the query string.
query-plan-start
()
Probe that fires when the planning of a query is started.
query-plan-done
()
Probe that fires when the planning of a query is complete.
query-execute-start
()
Probe that fires when the execution of a query is started.
query-execute-done
()
Probe that fires when the execution of a query is complete.
statement-status
(const char *)
Probe that fires anytime the server process updates its pg_stat_activity
.status
. arg0 is the new status string.
checkpoint-start
(int)
Probe that fires when a checkpoint is started. arg0 holds the bitwise flags used to distinguish different checkpoint types, such as shutdown, immediate or force.
checkpoint-done
(int, int, int, int, int)
Probe that fires when a checkpoint is complete. (The probes listed next fire in sequence during checkpoint processing.) arg0 is the number of buffers written. arg1 is the total number of buffers. arg2, arg3 and arg4 contain the number of WAL files added, removed and recycled respectively.
clog-checkpoint-start
(bool)
Probe that fires when the CLOG portion of a checkpoint is started. arg0 is true for normal checkpoint, false for shutdown checkpoint.
clog-checkpoint-done
(bool)
Probe that fires when the CLOG portion of a checkpoint is complete. arg0 has the same meaning as for clog-checkpoint-start
.
subtrans-checkpoint-start
(bool)
Probe that fires when the SUBTRANS portion of a checkpoint is started. arg0 is true for normal checkpoint, false for shutdown checkpoint.
subtrans-checkpoint-done
(bool)
Probe that fires when the SUBTRANS portion of a checkpoint is complete. arg0 has the same meaning as for subtrans-checkpoint-start
.
multixact-checkpoint-start
(bool)
Probe that fires when the MultiXact portion of a checkpoint is started. arg0 is true for normal checkpoint, false for shutdown checkpoint.
multixact-checkpoint-done
(bool)
Probe that fires when the MultiXact portion of a checkpoint is complete. arg0 has the same meaning as for multixact-checkpoint-start
.
buffer-checkpoint-start
(int)
Probe that fires when the buffer-writing portion of a checkpoint is started. arg0 holds the bitwise flags used to distinguish different checkpoint types, such as shutdown, immediate or force.
buffer-sync-start
(int, int)
Probe that fires when we begin to write dirty buffers during checkpoint (after identifying which buffers must be written). arg0 is the total number of buffers. arg1 is the number that are currently dirty and need to be written.
buffer-sync-written
(int)
Probe that fires after each buffer is written during checkpoint. arg0 is the ID number of the buffer.
buffer-sync-done
(int, int, int)
Probe that fires when all dirty buffers have been written. arg0 is the total number of buffers. arg1 is the number of buffers actually written by the checkpoint process. arg2 is the number that were expected to be written (arg1 of buffer-sync-start
); any difference reflects other processes flushing buffers during the checkpoint.
buffer-checkpoint-sync-start
()
Probe that fires after dirty buffers have been written to the kernel, and before starting to issue fsync requests.
buffer-checkpoint-done
()
Probe that fires when syncing of buffers to disk is complete.
twophase-checkpoint-start
()
Probe that fires when the two-phase portion of a checkpoint is started.
twophase-checkpoint-done
()
Probe that fires when the two-phase portion of a checkpoint is complete.
buffer-read-start
(ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool)
Probe that fires when a buffer read is started. arg0 and arg1 contain the fork and block numbers of the page (but arg1 will be -1 if this is a relation extension request). arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId
(-1) for a shared buffer. arg6 is true for a relation extension request, false for normal read.
buffer-read-done
(ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool, bool)
Probe that fires when a buffer read is complete. arg0 and arg1 contain the fork and block numbers of the page (if this is a relation extension request, arg1 now contains the block number of the newly added block). arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId
(-1) for a shared buffer. arg6 is true for a relation extension request, false for normal read. arg7 is true if the buffer was found in the pool, false if not.
buffer-flush-start
(ForkNumber, BlockNumber, Oid, Oid, Oid)
Probe that fires before issuing any write request for a shared buffer. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation.
buffer-flush-done
(ForkNumber, BlockNumber, Oid, Oid, Oid)
Probe that fires when a write request is complete. (Note that this just reflects the time to pass the data to the kernel; it's typically not actually been written to disk yet.) The arguments are the same as for buffer-flush-start
.
buffer-write-dirty-start
(ForkNumber, BlockNumber, Oid, Oid, Oid)
buffer-write-dirty-done
(ForkNumber, BlockNumber, Oid, Oid, Oid)
Probe that fires when a dirty-buffer write is complete. The arguments are the same as for buffer-write-dirty-start
.
wal-buffer-write-dirty-start
()
wal-buffer-write-dirty-done
()
Probe that fires when a dirty WAL buffer write is complete.
wal-insert
(unsigned char, unsigned char)
Probe that fires when a WAL record is inserted. arg0 is the resource manager (rmid) for the record. arg1 contains the info flags.
wal-switch
()
Probe that fires when a WAL segment switch is requested.
smgr-md-read-start
(ForkNumber, BlockNumber, Oid, Oid, Oid, int)
Probe that fires when beginning to read a block from a relation. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId
(-1) for a shared buffer.
smgr-md-read-done
(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int)
Probe that fires when a block read is complete. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId
(-1) for a shared buffer. arg6 is the number of bytes actually read, while arg7 is the number requested (if these are different it indicates trouble).
smgr-md-write-start
(ForkNumber, BlockNumber, Oid, Oid, Oid, int)
Probe that fires when beginning to write a block to a relation. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId
(-1) for a shared buffer.
smgr-md-write-done
(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int)
Probe that fires when a block write is complete. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId
(-1) for a shared buffer. arg6 is the number of bytes actually written, while arg7 is the number requested (if these are different it indicates trouble).
sort-start
(int, bool, int, int, bool, int)
Probe that fires when a sort operation is started. arg0 indicates heap, index or datum sort. arg1 is true for unique-value enforcement. arg2 is the number of key columns. arg3 is the number of kilobytes of work memory allowed. arg4 is true if random access to the sort result is required. arg5 indicates serial when 0
, parallel worker when 1
, or parallel leader when 2
.
sort-done
(bool, long)
Probe that fires when a sort is complete. arg0 is true for external sort, false for internal sort. arg1 is the number of disk blocks used for an external sort, or kilobytes of memory used for an internal sort.
lwlock-acquire
(char *, LWLockMode)
Probe that fires when an LWLock has been acquired. arg0 is the LWLock's tranche. arg1 is the requested lock mode, either exclusive or shared.
lwlock-release
(char *)
Probe that fires when an LWLock has been released (but note that any released waiters have not yet been awakened). arg0 is the LWLock's tranche.
lwlock-wait-start
(char *, LWLockMode)
Probe that fires when an LWLock was not immediately available and a server process has begun to wait for the lock to become available. arg0 is the LWLock's tranche. arg1 is the requested lock mode, either exclusive or shared.
lwlock-wait-done
(char *, LWLockMode)
Probe that fires when a server process has been released from its wait for an LWLock (it does not actually have the lock yet). arg0 is the LWLock's tranche. arg1 is the requested lock mode, either exclusive or shared.
lwlock-condacquire
(char *, LWLockMode)
Probe that fires when an LWLock was successfully acquired when the caller specified no waiting. arg0 is the LWLock's tranche. arg1 is the requested lock mode, either exclusive or shared.
lwlock-condacquire-fail
(char *, LWLockMode)
Probe that fires when an LWLock was not successfully acquired when the caller specified no waiting. arg0 is the LWLock's tranche. arg1 is the requested lock mode, either exclusive or shared.
lock-wait-start
(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE)
Probe that fires when a request for a heavyweight lock (lmgr lock) has begun to wait because the lock is not available. arg0 through arg3 are the tag fields identifying the object being locked. arg4 indicates the type of object being locked. arg5 indicates the lock type being requested.
lock-wait-done
(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE)
Probe that fires when a request for a heavyweight lock (lmgr lock) has finished waiting (i.e., has acquired the lock). The arguments are the same as for lock-wait-start
.
deadlock-found
()
Probe that fires when a deadlock is found by the deadlock detector.
LocalTransactionId
unsigned int
LWLockMode
int
LOCKMODE
int
BlockNumber
unsigned int
Oid
unsigned int
ForkNumber
int
bool
unsigned char
The example below shows a DTrace script for analyzing transaction counts in the system, as an alternative to snapshotting pg_stat_database
before and after a performance test:
When executed, the example D script gives output such as:
SystemTap uses a different notation for trace scripts than DTrace does, even though the underlying trace points are compatible. One point worth noting is that at this writing, SystemTap scripts must reference probe names using double underscores in place of hyphens. This is expected to be fixed in future SystemTap releases.
You should remember that DTrace scripts need to be carefully written and debugged, otherwise the trace information collected might be meaningless. In most cases where problems are found it is the instrumentation that is at fault, not the underlying system. When discussing information found using dynamic tracing, be sure to enclose the script used to allow that too to be checked and discussed.
New probes can be defined within the code wherever the developer desires, though this will require a recompilation. Below are the steps for inserting new probes:
Decide on probe names and data to be made available through the probes
Add the probe definitions to src/backend/utils/probes.d
Include pg_trace.h
if it is not already present in the module(s) containing the probe points, and insert TRACE_POSTGRESQL
probe macros at the desired locations in the source code
Recompile and verify that the new probes are available
**Example:** Here is an example of how you would add a probe to trace all new transactions by transaction ID.
Decide that the probe will be named transaction-start
and requires a parameter of type LocalTransactionId
Add the probe definition to src/backend/utils/probes.d
:
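    probe transaction__start(LocalTransactionId);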
Note the use of the double underline in the probe name. In a DTrace script using the probe, the double underline needs to be replaced with a hyphen, so transaction-start
is the name to document for users.
At compile time, transaction__start
is converted to a macro called TRACE_POSTGRESQL_TRANSACTION_START
(notice the underscores are single here), which is available by including pg_trace.h
. Add the macro call to the appropriate location in the source code. In this case, it looks like the following:
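    /* in StartTransaction(), once the new transaction's local ID is known */
    TRACE_POSTGRESQL_TRANSACTION_START(vxid.localTransactionId);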
After recompiling and running the new binary, check that your newly added probe is available by executing the following DTrace command. You should see similar output:
There are a few things to be careful about when adding trace macros to the C code:
You should take care that the data types specified for a probe's parameters match the data types of the variables used in the macro. Otherwise, you will get compilation errors.
On most platforms, if PostgreSQL is built with --enable-dtrace
, the arguments to a trace macro will be evaluated whenever control passes through the macro, even if no tracing is being done. This is usually not worth worrying about if you are just reporting the values of a few local variables. But beware of putting expensive function calls into the arguments. If you need to do that, consider protecting the macro with a check to see if the trace is actually enabled:
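    if (TRACE_POSTGRESQL_TRANSACTION_START_ENABLED())
        TRACE_POSTGRESQL_TRANSACTION_START(some_function(...));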
Each trace macro has a corresponding ENABLED
macro.
Probe that fires when a server process begins to write a dirty buffer. (If this happens often, it implies that shared_buffers is too small or the background writer control parameters need adjustment.) arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation.
Probe that fires when a server process begins to write a dirty WAL buffer because no more WAL buffer space is available. (If this happens often, it implies that wal_buffers is too small.)
PostgreSQL's statistics collector is a subsystem that supports collection and reporting of information about server activity. Presently, the collector can count accesses to tables and indexes in both disk-block and individual-row terms. It also tracks the total number of rows in each table, and information about vacuum and analyze actions for each table. It can also count calls to user-defined functions and the total time spent in each one.
PostgreSQL also supports reporting dynamic information about exactly what is going on in the system right now, such as the exact command currently being executed by other server processes, and which other connections exist in the system. This facility is independent of the collector process.
Since collection of statistics adds some overhead to query execution, the system can be configured to collect or not collect information. This is controlled by configuration parameters that are normally set in postgresql.conf
. (See Chapter 19 for details about setting configuration parameters.)
The parameter track_activities enables monitoring of the current command being executed by any server process.
The parameter track_counts controls whether statistics are collected about table and index accesses.
The parameter track_functions enables tracking of usage of user-defined functions.
The parameter track_io_timing enables monitoring of block read and write times.
Normally these parameters are set in postgresql.conf
so that they apply to all server processes, but it is possible to turn them on or off in individual sessions using the SET command. (To prevent ordinary users from hiding their activity from the administrator, only superusers are allowed to change these parameters with SET
.)
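For example, a superuser can enable I/O timing for the current session only (a sketch; any of the parameters above can be changed the same way):

    SET track_io_timing = on;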
The statistics collector transmits the collected information to other PostgreSQL processes through temporary files. These files are stored in the directory named by the stats_temp_directory parameter, pg_stat_tmp
by default. For better performance, stats_temp_directory
can be pointed at a RAM-based file system, decreasing physical I/O requirements. When the server shuts down cleanly, a permanent copy of the statistics data is stored in the pg_stat
subdirectory, so that statistics can be retained across server restarts. When recovery is performed at server start (e.g., after immediate shutdown, server crash, and point-in-time recovery), all statistics counters are reset.
Several predefined views, listed in Table 27.1, are available to show the current state of the system. There are also several other views, listed in Table 27.2, available to show the results of statistics collection. Alternatively, one can build custom views using the underlying statistics functions, as discussed in Section 27.2.20.
When using the statistics to monitor collected data, it is important to realize that the information does not update instantaneously. Each individual server process transmits new statistical counts to the collector just before going idle; so a query or transaction still in progress does not affect the displayed totals. Also, the collector itself emits a new report at most once per PGSTAT_STAT_INTERVAL
milliseconds (500 ms unless altered while building the server). So the displayed information lags behind actual activity. However, current-query information collected by track_activities
is always up-to-date.
Another important point is that when a server process is asked to display any of these statistics, it first fetches the most recent report emitted by the collector process and then continues to use this snapshot for all statistical views and functions until the end of its current transaction. So the statistics will show static information as long as you continue the current transaction. Similarly, information about the current queries of all sessions is collected when any such information is first requested within a transaction, and the same information will be displayed throughout the transaction. This is a feature, not a bug, because it allows you to perform several queries on the statistics and correlate the results without worrying that the numbers are changing underneath you. But if you want to see new results with each query, be sure to do the queries outside any transaction block. Alternatively, you can invoke pg_stat_clear_snapshot
(), which will discard the current transaction's statistics snapshot (if any). The next use of statistical information will cause a new snapshot to be fetched.
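For example, within a transaction block the snapshot can be refreshed explicitly using the pg_stat_clear_snapshot() function mentioned above (a sketch):

    BEGIN;
    SELECT * FROM pg_stat_user_tables;   -- takes and caches a statistics snapshot
    SELECT pg_stat_clear_snapshot();     -- discard the cached snapshot
    SELECT * FROM pg_stat_user_tables;   -- fetches a fresh snapshot
    COMMIT;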
A transaction can also see its own statistics (as yet untransmitted to the collector) in the views pg_stat_xact_all_tables
, pg_stat_xact_sys_tables
, pg_stat_xact_user_tables
, and pg_stat_xact_user_functions
. These numbers do not act as stated above; instead they update continuously throughout the transaction.
Some of the information in the dynamic statistics views shown in Table 27.1 is security restricted. Ordinary users can only see all the information about their own sessions (sessions belonging to a role that they are a member of). In rows about other sessions, many columns will be null. Note, however, that the existence of a session and its general properties such as its session user and database are visible to all users. Superusers and members of the built-in role pg_read_all_stats
(see also Section 21.5) can see all the information about all sessions.
pg_stat_activity
pg_stat_replication
pg_stat_wal_receiver
pg_stat_subscription
pg_stat_ssl
pg_stat_gssapi
pg_stat_progress_analyze
pg_stat_progress_create_index
pg_stat_progress_vacuum
pg_stat_progress_cluster
pg_stat_progress_basebackup
pg_stat_archiver
pg_stat_bgwriter
pg_stat_database
pg_stat_database_conflicts
pg_stat_all_tables
pg_stat_sys_tables
Same as pg_stat_all_tables, except that only system tables are shown.
pg_stat_user_tables
Same as pg_stat_all_tables, except that only user tables are shown.
pg_stat_xact_all_tables
Similar to pg_stat_all_tables
, but counts actions taken so far within the current transaction (which are not yet included in pg_stat_all_tables
and related views). The columns for numbers of live and dead rows and vacuum and analyze actions are not present in this view.
pg_stat_xact_sys_tables
Same as pg_stat_xact_all_tables
, except that only system tables are shown.
pg_stat_xact_user_tables
Same as pg_stat_xact_all_tables
, except that only user tables are shown.
pg_stat_all_indexes
pg_stat_sys_indexes
Same as pg_stat_all_indexes
, except that only indexes on system tables are shown.
pg_stat_user_indexes
Same as pg_stat_all_indexes
, except that only indexes on user tables are shown.
pg_statio_all_tables
pg_statio_sys_tables
Same as pg_statio_all_tables
, except that only system tables are shown.
pg_statio_user_tables
Same as pg_statio_all_tables
, except that only user tables are shown.
pg_statio_all_indexes
pg_statio_sys_indexes
Same as pg_statio_all_indexes
, except that only indexes on system tables are shown.
pg_statio_user_indexes
Same as pg_statio_all_indexes
, except that only indexes on user tables are shown.
pg_statio_all_sequences
pg_statio_sys_sequences
Same as pg_statio_all_sequences
, except that only system sequences are shown. (Presently, no system sequences are defined, so this view is always empty.)
pg_statio_user_sequences
Same as pg_statio_all_sequences
, except that only user sequences are shown.
pg_stat_user_functions
pg_stat_xact_user_functions
Similar to pg_stat_user_functions
, but counts only calls during the current transaction (which are not yet included in pg_stat_user_functions
).
pg_stat_slru
The per-index statistics are particularly useful to determine which indexes are being used and how effective they are.
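For example, a query along these lines (a sketch using the idx_scan column of pg_stat_user_indexes) highlights indexes that are rarely or never used:

    SELECT schemaname, relname, indexrelname, idx_scan
    FROM pg_stat_user_indexes
    ORDER BY idx_scan ASC
    LIMIT 10;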
The pg_statio_
views are primarily useful to determine the effectiveness of the buffer cache. When the number of actual disk reads is much smaller than the number of buffer hits, then the cache is satisfying most read requests without invoking a kernel call. However, these statistics do not give the entire story: due to the way in which PostgreSQL handles disk I/O, data that is not in the PostgreSQL buffer cache might still reside in the kernel's I/O cache, and might therefore still be fetched without requiring a physical read. Users interested in obtaining more detailed information on PostgreSQL I/O behavior are advised to use the PostgreSQL statistics collector in combination with operating system utilities that allow insight into the kernel's handling of I/O.
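For example, an approximate per-table buffer-cache hit ratio can be computed from pg_statio_user_tables (a sketch; remember that a miss here may still be served from the kernel cache):

    SELECT relname, heap_blks_read, heap_blks_hit,
           round(100.0 * heap_blks_hit / nullif(heap_blks_hit + heap_blks_read, 0), 1) AS hit_pct
    FROM pg_statio_user_tables
    ORDER BY heap_blks_hit + heap_blks_read DESC
    LIMIT 10;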
pg_stat_activity
The pg_stat_activity
view will have one row per server process, showing information related to the current activity of that process.
pg_stat_activity View
Column Type
Description
datid
oid
OID of the database this backend is connected to
datname
name
Name of the database this backend is connected to
pid
integer
Process ID of this backend
leader_pid
integer
Process ID of the parallel group leader, if this process is a parallel query worker. NULL
if this process is a parallel group leader or does not participate in parallel query.
usesysid
oid
OID of the user logged into this backend
usename
name
Name of the user logged into this backend
application_name
text
Name of the application that is connected to this backend
client_addr
inet
IP address of the client connected to this backend. If this field is null, it indicates either that the client is connected via a Unix socket on the server machine or that this is an internal process such as autovacuum.
client_hostname
text
client_port
integer
TCP port number that the client is using for communication with this backend, or -1
if a Unix socket is used. If this field is null, it indicates that this is an internal server process.
backend_start
timestamp with time zone
Time when this process was started. For client backends, this is the time the client connected to the server.
xact_start
timestamp with time zone
Time when this process' current transaction was started, or null if no transaction is active. If the current query is the first of its transaction, this column is equal to the query_start
column.
query_start
timestamp with time zone
Time when the currently active query was started, or if state
is not active
, when the last query was started
state_change
timestamp with time zone
Time when the state
was last changed
wait_event_type
text
wait_event
text
state
text
Current overall state of this backend. Possible values are:
active
: The backend is executing a query.
idle
: The backend is waiting for a new client command.
idle in transaction
: The backend is in a transaction, but is not currently executing a query.
idle in transaction (aborted)
: This state is similar to idle in transaction
, except one of the statements in the transaction caused an error.
fastpath function call
: The backend is executing a fast-path function.
backend_xid
xid
Top-level transaction identifier of this backend, if any.
backend_xmin
xid
The current backend's xmin
horizon.
query
text
backend_type
text
Type of current backend. Possible types are autovacuum launcher
, autovacuum worker
, logical replication launcher
, logical replication worker
, parallel worker
, background writer
, client backend
, checkpointer
, startup
, walreceiver
, walsender
and walwriter
. In addition, background workers registered by extensions may have additional types.
The wait_event
and state
columns are independent. If a backend is in the active
state, it may or may not be waiting
on some event. If the state is active
and wait_event
is non-null, it means that a query is being executed, but is being blocked somewhere in the system.
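For example, a query such as the following (a sketch over the columns described above) lists active backends that are currently blocked on some wait event:

    SELECT pid, state, wait_event_type, wait_event, query
    FROM pg_stat_activity
    WHERE state = 'active' AND wait_event IS NOT NULL;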
Activity
BufferPin
Client
Extension
IO
IPC
Lock
LWLock
Timeout
Activity
Activity
Wait Event
Description
ArchiverMain
Waiting in main loop of archiver process.
AutoVacuumMain
Waiting in main loop of autovacuum launcher process.
BgWriterHibernate
Waiting in background writer process, hibernating.
BgWriterMain
Waiting in main loop of background writer process.
CheckpointerMain
Waiting in main loop of checkpointer process.
LogicalApplyMain
Waiting in main loop of logical replication apply process.
LogicalLauncherMain
Waiting in main loop of logical replication launcher process.
PgStatMain
Waiting in main loop of statistics collector process.
RecoveryWalStream
Waiting in main loop of startup process for WAL to arrive, during streaming recovery.
SysLoggerMain
Waiting in main loop of syslogger process.
WalReceiverMain
Waiting in main loop of WAL receiver process.
WalSenderMain
Waiting in main loop of WAL sender process.
WalWriterMain
Waiting in main loop of WAL writer process.
BufferPin
BufferPin
Wait Event
Description
BufferPin
Waiting to acquire an exclusive pin on a buffer.
Client
Client
Wait Event
Description
ClientRead
Waiting to read data from the client.
ClientWrite
Waiting to write data to the client.
GSSOpenServer
Waiting to read data from the client while establishing a GSSAPI session.
LibPQWalReceiverConnect
Waiting in WAL receiver to establish connection to remote server.
LibPQWalReceiverReceive
Waiting in WAL receiver to receive data from remote server.
SSLOpenServer
Waiting for SSL while attempting connection.
WalReceiverWaitStart
Waiting for startup process to send initial data for streaming replication.
WalSenderWaitForWAL
Waiting for WAL to be flushed in WAL sender process.
WalSenderWriteData
Waiting for any activity when processing replies from WAL receiver in WAL sender process.
Extension
Extension
Wait Event
Description
Extension
Waiting in an extension.
IO
IO
Wait Event
Description
BufFileRead
Waiting for a read from a buffered file.
BufFileWrite
Waiting for a write to a buffered file.
ControlFileRead
Waiting for a read from the pg_control
file.
ControlFileSync
Waiting for the pg_control
file to reach durable storage.
ControlFileSyncUpdate
Waiting for an update to the pg_control
file to reach durable storage.
ControlFileWrite
Waiting for a write to the pg_control
file.
ControlFileWriteUpdate
Waiting for a write to update the pg_control
file.
CopyFileRead
Waiting for a read during a file copy operation.
CopyFileWrite
Waiting for a write during a file copy operation.
DSMFillZeroWrite
Waiting to fill a dynamic shared memory backing file with zeroes.
DataFileExtend
Waiting for a relation data file to be extended.
DataFileFlush
Waiting for a relation data file to reach durable storage.
DataFileImmediateSync
Waiting for an immediate synchronization of a relation data file to durable storage.
DataFilePrefetch
Waiting for an asynchronous prefetch from a relation data file.
DataFileRead
Waiting for a read from a relation data file.
DataFileSync
Waiting for changes to a relation data file to reach durable storage.
DataFileTruncate
Waiting for a relation data file to be truncated.
DataFileWrite
Waiting for a write to a relation data file.
LockFileAddToDataDirRead
Waiting for a read while adding a line to the data directory lock file.
LockFileAddToDataDirSync
Waiting for data to reach durable storage while adding a line to the data directory lock file.
LockFileAddToDataDirWrite
Waiting for a write while adding a line to the data directory lock file.
LockFileCreateRead
Waiting to read while creating the data directory lock file.
LockFileCreateSync
Waiting for data to reach durable storage while creating the data directory lock file.
LockFileCreateWrite
Waiting for a write while creating the data directory lock file.
LockFileReCheckDataDirRead
Waiting for a read during recheck of the data directory lock file.
LogicalRewriteCheckpointSync
Waiting for logical rewrite mappings to reach durable storage during a checkpoint.
LogicalRewriteMappingSync
Waiting for mapping data to reach durable storage during a logical rewrite.
LogicalRewriteMappingWrite
Waiting for a write of mapping data during a logical rewrite.
LogicalRewriteSync
Waiting for logical rewrite mappings to reach durable storage.
LogicalRewriteTruncate
Waiting for truncate of mapping data during a logical rewrite.
LogicalRewriteWrite
Waiting for a write of logical rewrite mappings.
RelationMapRead
Waiting for a read of the relation map file.
RelationMapSync
Waiting for the relation map file to reach durable storage.
RelationMapWrite
Waiting for a write to the relation map file.
ReorderBufferRead
Waiting for a read during reorder buffer management.
ReorderBufferWrite
Waiting for a write during reorder buffer management.
ReorderLogicalMappingRead
Waiting for a read of a logical mapping during reorder buffer management.
ReplicationSlotRead
Waiting for a read from a replication slot control file.
ReplicationSlotRestoreSync
Waiting for a replication slot control file to reach durable storage while restoring it to memory.
ReplicationSlotSync
Waiting for a replication slot control file to reach durable storage.
ReplicationSlotWrite
Waiting for a write to a replication slot control file.
SLRUFlushSync
Waiting for SLRU data to reach durable storage during a checkpoint or database shutdown.
SLRURead
Waiting for a read of an SLRU page.
SLRUSync
Waiting for SLRU data to reach durable storage following a page write.
SLRUWrite
Waiting for a write of an SLRU page.
SnapbuildRead
Waiting for a read of a serialized historical catalog snapshot.
SnapbuildSync
Waiting for a serialized historical catalog snapshot to reach durable storage.
SnapbuildWrite
Waiting for a write of a serialized historical catalog snapshot.
TimelineHistoryFileSync
Waiting for a timeline history file received via streaming replication to reach durable storage.
TimelineHistoryFileWrite
Waiting for a write of a timeline history file received via streaming replication.
TimelineHistoryRead
Waiting for a read of a timeline history file.
TimelineHistorySync
Waiting for a newly created timeline history file to reach durable storage.
TimelineHistoryWrite
Waiting for a write of a newly created timeline history file.
TwophaseFileRead
Waiting for a read of a two phase state file.
TwophaseFileSync
Waiting for a two phase state file to reach durable storage.
TwophaseFileWrite
Waiting for a write of a two phase state file.
WALBootstrapSync
Waiting for WAL to reach durable storage during bootstrapping.
WALBootstrapWrite
Waiting for a write of a WAL page during bootstrapping.
WALCopyRead
Waiting for a read when creating a new WAL segment by copying an existing one.
WALCopySync
Waiting for a new WAL segment created by copying an existing one to reach durable storage.
WALCopyWrite
Waiting for a write when creating a new WAL segment by copying an existing one.
WALInitSync
Waiting for a newly initialized WAL file to reach durable storage.
WALInitWrite
Waiting for a write while initializing a new WAL file.
WALRead
Waiting for a read from a WAL file.
WALSenderTimelineHistoryRead
Waiting for a read from a timeline history file during a walsender timeline command.
WALSync
Waiting for a WAL file to reach durable storage.
WALSyncMethodAssign
Waiting for data to reach durable storage while assigning a new WAL sync method.
WALWrite
Waiting for a write to a WAL file.
IPC
IPC
Wait Event
Description
BackupWaitWalArchive
Waiting for WAL files required for a backup to be successfully archived.
BgWorkerShutdown
Waiting for background worker to shut down.
BgWorkerStartup
Waiting for background worker to start up.
BtreePage
Waiting for the page number needed to continue a parallel B-tree scan to become available.
CheckpointDone
Waiting for a checkpoint to complete.
CheckpointStart
Waiting for a checkpoint to start.
ExecuteGather
Waiting for activity from a child process while executing a Gather
plan node.
HashBatchAllocate
Waiting for an elected Parallel Hash participant to allocate a hash table.
HashBatchElect
Waiting to elect a Parallel Hash participant to allocate a hash table.
HashBatchLoad
Waiting for other Parallel Hash participants to finish loading a hash table.
HashBuildAllocate
Waiting for an elected Parallel Hash participant to allocate the initial hash table.
HashBuildElect
Waiting to elect a Parallel Hash participant to allocate the initial hash table.
HashBuildHashInner
Waiting for other Parallel Hash participants to finish hashing the inner relation.
HashBuildHashOuter
Waiting for other Parallel Hash participants to finish partitioning the outer relation.
HashGrowBatchesAllocate
Waiting for an elected Parallel Hash participant to allocate more batches.
HashGrowBatchesDecide
Waiting to elect a Parallel Hash participant to decide on future batch growth.
HashGrowBatchesElect
Waiting to elect a Parallel Hash participant to allocate more batches.
HashGrowBatchesFinish
Waiting for an elected Parallel Hash participant to decide on future batch growth.
HashGrowBatchesRepartition
Waiting for other Parallel Hash participants to finish repartitioning.
HashGrowBucketsAllocate
Waiting for an elected Parallel Hash participant to finish allocating more buckets.
HashGrowBucketsElect
Waiting to elect a Parallel Hash participant to allocate more buckets.
HashGrowBucketsReinsert
Waiting for other Parallel Hash participants to finish inserting tuples into new buckets.
LogicalSyncData
Waiting for a logical replication remote server to send data for initial table synchronization.
LogicalSyncStateChange
Waiting for a logical replication remote server to change state.
MessageQueueInternal
Waiting for another process to be attached to a shared message queue.
MessageQueuePutMessage
Waiting to write a protocol message to a shared message queue.
MessageQueueReceive
Waiting to receive bytes from a shared message queue.
MessageQueueSend
Waiting to send bytes to a shared message queue.
ParallelBitmapScan
Waiting for parallel bitmap scan to become initialized.
ParallelCreateIndexScan
Waiting for parallel CREATE INDEX
workers to finish heap scan.
ParallelFinish
Waiting for parallel workers to finish computing.
ProcArrayGroupUpdate
Waiting for the group leader to clear the transaction ID at end of a parallel operation.
ProcSignalBarrier
Waiting for a barrier event to be processed by all backends.
Promote
Waiting for standby promotion.
RecoveryConflictSnapshot
Waiting for recovery conflict resolution for a vacuum cleanup.
RecoveryConflictTablespace
Waiting for recovery conflict resolution for dropping a tablespace.
RecoveryPause
Waiting for recovery to be resumed.
ReplicationOriginDrop
Waiting for a replication origin to become inactive so it can be dropped.
ReplicationSlotDrop
Waiting for a replication slot to become inactive so it can be dropped.
SafeSnapshot
Waiting to obtain a valid snapshot for a READ ONLY DEFERRABLE
transaction.
SyncRep
Waiting for confirmation from a remote server during synchronous replication.
XactGroupUpdate
Waiting for the group leader to update transaction status at end of a parallel operation.
Lock
Lock
Wait Event
Description
advisory
Waiting to acquire an advisory user lock.
extend
Waiting to extend a relation.
frozenid
Waiting to update pg_database.datfrozenxid and pg_database.datminmxid.
object
Waiting to acquire a lock on a non-relation database object.
page
Waiting to acquire a lock on a page of a relation.
relation
Waiting to acquire a lock on a relation.
spectoken
Waiting to acquire a speculative insertion lock.
transactionid
Waiting for a transaction to finish.
tuple
Waiting to acquire a lock on a tuple.
userlock
Waiting to acquire a user lock.
virtualxid
Waiting to acquire a virtual transaction ID lock.
LWLock
LWLock
Wait Event
Description
AddinShmemInit
Waiting to manage an extension's space allocation in shared memory.
AutoFile
Waiting to update the postgresql.auto.conf
file.
Autovacuum
Waiting to read or update the current state of autovacuum workers.
AutovacuumSchedule
Waiting to ensure that a table selected for autovacuum still needs vacuuming.
BackgroundWorker
Waiting to read or update background worker state.
BtreeVacuum
Waiting to read or update vacuum-related information for a B-tree index.
BufferContent
Waiting to access a data page in memory.
BufferIO
Waiting for I/O on a data page.
BufferMapping
Waiting to associate a data block with a buffer in the buffer pool.
Checkpoint
Waiting to begin a checkpoint.
CheckpointerComm
Waiting to manage fsync requests.
CommitTs
Waiting to read or update the last value set for a transaction commit timestamp.
CommitTsBuffer
Waiting for I/O on a commit timestamp SLRU buffer.
CommitTsSLRU
Waiting to access the commit timestamp SLRU cache.
ControlFile
Waiting to read or update the pg_control
file or create a new WAL file.
DynamicSharedMemoryControl
Waiting to read or update dynamic shared memory allocation information.
LockFastPath
Waiting to read or update a process' fast-path lock information.
LockManager
Waiting to read or update information about “heavyweight” locks.
LogicalRepWorker
Waiting to read or update the state of logical replication workers.
MultiXactGen
Waiting to read or update shared multixact state.
MultiXactMemberBuffer
Waiting for I/O on a multixact member SLRU buffer.
MultiXactMemberSLRU
Waiting to access the multixact member SLRU cache.
MultiXactOffsetBuffer
Waiting for I/O on a multixact offset SLRU buffer.
MultiXactOffsetSLRU
Waiting to access the multixact offset SLRU cache.
MultiXactTruncation
Waiting to read or truncate multixact information.
NotifyBuffer
Waiting for I/O on a NOTIFY
message SLRU buffer.
NotifyQueue
Waiting to read or update NOTIFY
messages.
NotifyQueueTail
Waiting to update limit on NOTIFY
message storage.
NotifySLRU
Waiting to access the NOTIFY
message SLRU cache.
OidGen
Waiting to allocate a new OID.
OldSnapshotTimeMap
Waiting to read or update old snapshot control information.
ParallelAppend
Waiting to choose the next subplan during Parallel Append plan execution.
ParallelHashJoin
Waiting to synchronize workers during Parallel Hash Join plan execution.
ParallelQueryDSA
Waiting for parallel query dynamic shared memory allocation.
PerSessionDSA
Waiting for parallel query dynamic shared memory allocation.
PerSessionRecordType
Waiting to access a parallel query's information about composite types.
PerSessionRecordTypmod
Waiting to access a parallel query's information about type modifiers that identify anonymous record types.
PerXactPredicateList
Waiting to access the list of predicate locks held by the current serializable transaction during a parallel query.
PredicateLockManager
Waiting to access predicate lock information used by serializable transactions.
ProcArray
Waiting to access the shared per-process data structures (typically, to get a snapshot or report a session's transaction ID).
RelationMapping
Waiting to read or update a pg_filenode.map
file (used to track the filenode assignments of certain system catalogs).
RelCacheInit
Waiting to read or update a pg_internal.init
relation cache initialization file.
ReplicationOrigin
Waiting to create, drop or use a replication origin.
ReplicationOriginState
Waiting to read or update the progress of one replication origin.
ReplicationSlotAllocation
Waiting to allocate or free a replication slot.
ReplicationSlotControl
Waiting to read or update replication slot state.
ReplicationSlotIO
Waiting for I/O on a replication slot.
SerialBuffer
Waiting for I/O on a serializable transaction conflict SLRU buffer.
SerializableFinishedList
Waiting to access the list of finished serializable transactions.
SerializablePredicateList
Waiting to access the list of predicate locks held by serializable transactions.
SerializableXactHash
Waiting to read or update information about serializable transactions.
SerialSLRU
Waiting to access the serializable transaction conflict SLRU cache.
SharedTidBitmap
Waiting to access a shared TID bitmap during a parallel bitmap index scan.
SharedTupleStore
Waiting to access a shared tuple store during parallel query.
ShmemIndex
Waiting to find or allocate space in shared memory.
SInvalRead
Waiting to retrieve messages from the shared catalog invalidation queue.
SInvalWrite
Waiting to add a message to the shared catalog invalidation queue.
SubtransBuffer
Waiting for I/O on a sub-transaction SLRU buffer.
SubtransSLRU
Waiting to access the sub-transaction SLRU cache.
SyncRep
Waiting to read or update information about the state of synchronous replication.
SyncScan
Waiting to select the starting location of a synchronized table scan.
TablespaceCreate
Waiting to create or drop a tablespace.
TwoPhaseState
Waiting to read or update the state of prepared transactions.
WALBufMapping
Waiting to replace a page in WAL buffers.
WALInsert
Waiting to insert WAL data into a memory buffer.
WALWrite
Waiting for WAL buffers to be written to disk.
WrapLimitsVacuum
Waiting to update limits on transaction id and multixact consumption.
XactBuffer
Waiting for I/O on a transaction status SLRU buffer.
XactSLRU
Waiting to access the transaction status SLRU cache.
XactTruncation
Waiting to execute pg_xact_status
or update the oldest transaction ID available to it.
XidGen
Waiting to allocate a new transaction ID.
Extensions can add LWLock types to the list shown in Table 27.12. In some cases, the name assigned by an extension will not be available in all server processes; so an LWLock wait event might be reported as just “extension” rather than the extension-assigned name.
Timeout
Timeout
Wait Event
Description
BaseBackupThrottle
Waiting during base backup when throttling activity.
PgSleep
Waiting due to a call to pg_sleep
or a sibling function.
RecoveryApplyDelay
Waiting to apply WAL during recovery because of a delay setting.
RecoveryRetrieveRetryInterval
Waiting during recovery when WAL data is not available from any source (pg_wal
, archive or stream).
VacuumDelay
Waiting in a cost-based vacuum delay point.
Here is an example of how wait events can be viewed:
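For instance, the following query against pg_stat_activity lists the backends that are currently waiting, along with the type and name of the wait event (the rows returned will naturally depend on what is happening on the server at that moment):
SELECT pid, wait_event_type, wait_event
FROM pg_stat_activity
WHERE wait_event IS NOT NULL;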
pg_stat_replication
Each row of the pg_stat_replication view represents one WAL sender process, showing statistics about replication to that sender's connected standby server. Only directly connected standbys are listed; no information is available about downstream standby servers.
pg_stat_replication
View
Column Type
Description
pid
integer
Process ID of a WAL sender process
usesysid
oid
OID of the user logged into this WAL sender process
usename
name
Name of the user logged into this WAL sender process
application_name
text
Name of the application that is connected to this WAL sender
client_addr
inet
IP address of the client connected to this WAL sender. If this field is null, it indicates that the client is connected via a Unix socket on the server machine.
client_hostname
text
client_port
integer
TCP port number that the client is using for communication with this WAL sender, or -1
if a Unix socket is used
backend_start
timestamp with time zone
Time when this process was started, i.e., when the client connected to this WAL sender
backend_xmin
xid
state
text
Current WAL sender state. Possible values are:
startup
: This WAL sender is starting up.
catchup
: This WAL sender's connected standby is catching up with the primary.
streaming
: This WAL sender is streaming changes after its connected standby server has caught up with the primary.
backup
: This WAL sender is sending a backup.
stopping
: This WAL sender is stopping.
sent_lsn
pg_lsn
Last write-ahead log location sent on this connection
write_lsn
pg_lsn
Last write-ahead log location written to disk by this standby server
flush_lsn
pg_lsn
Last write-ahead log location flushed to disk by this standby server
replay_lsn
pg_lsn
Last write-ahead log location replayed into the database on this standby server
write_lag
interval
Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written it (but not yet flushed it or applied it). This can be used to gauge the delay that synchronous_commit level remote_write incurred while committing if this server was configured as a synchronous standby.
flush_lag
interval
Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written and flushed it (but not yet applied it). This can be used to gauge the delay that synchronous_commit level on incurred while committing if this server was configured as a synchronous standby.
replay_lag
interval
Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written, flushed and applied it. This can be used to gauge the delay that synchronous_commit level remote_apply incurred while committing if this server was configured as a synchronous standby.
sync_priority
integer
Priority of this standby server for being chosen as the synchronous standby in a priority-based synchronous replication. This has no effect in a quorum-based synchronous replication.
sync_state
text
Synchronous state of this standby server. Possible values are:
async
: This standby server is asynchronous.
potential
: This standby server is now asynchronous, but can potentially become synchronous if one of current synchronous ones fails.
sync
: This standby server is synchronous.
quorum
: This standby server is considered as a candidate for quorum standbys.
reply_time
timestamp with time zone
Send time of last reply message received from standby server
The lag times reported in the pg_stat_replication
view are measurements of the time taken for recent WAL to be written, flushed and replayed and for the sender to know about it. These times represent the commit delay that was (or would have been) introduced by each synchronous commit level, if the remote server was configured as a synchronous standby. For an asynchronous standby, the replay_lag
column approximates the delay before recent transactions became visible to queries. If the standby server has entirely caught up with the sending server and there is no more WAL activity, the most recently measured lag times will continue to be displayed for a short time and then show NULL.
Lag times work automatically for physical replication. Logical decoding plugins may optionally emit tracking messages; if they do not, the tracking mechanism will simply display NULL lag.
The reported lag times are not predictions of how long it will take for the standby to catch up with the sending server assuming the current rate of replay. Such a system would show similar times while new WAL is being generated, but would differ when the sender becomes idle. In particular, when the standby has caught up completely, pg_stat_replication
shows the time taken to write, flush and replay the most recent reported WAL location rather than zero as some users might expect. This is consistent with the goal of measuring synchronous commit and transaction visibility delays for recent write transactions. To reduce confusion for users expecting a different model of lag, the lag columns revert to NULL after a short time on a fully replayed idle system. Monitoring systems should choose whether to represent this as missing data, zero or continue to display the last known value.
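As a rough monitoring sketch (not the only possible formulation), the state and lag of each directly connected standby can be inspected with a query such as:
SELECT application_name, client_addr, state, sync_state,
       write_lag, flush_lag, replay_lag
FROM pg_stat_replication;
Keep in mind that, as explained above, the lag columns revert to NULL on an idle, fully caught-up system.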
pg_stat_wal_receiver
The pg_stat_wal_receiver
view will contain only one row, showing statistics about the WAL receiver from that receiver's connected server.
pg_stat_wal_receiver
View
Column Type
Description
pid
integer
Process ID of the WAL receiver process
status
text
Activity status of the WAL receiver process
receive_start_lsn
pg_lsn
First write-ahead log location used when WAL receiver is started
receive_start_tli
integer
First timeline number used when WAL receiver is started
written_lsn
pg_lsn
Last write-ahead log location already received and written to disk, but not flushed. This should not be used for data integrity checks.
flushed_lsn
pg_lsn
Last write-ahead log location already received and flushed to disk, the initial value of this field being the first log location used when WAL receiver is started
received_tli
integer
Timeline number of last write-ahead log location received and flushed to disk, the initial value of this field being the timeline number of the first log location used when WAL receiver is started
last_msg_send_time
timestamp with time zone
Send time of last message received from origin WAL sender
last_msg_receipt_time
timestamp with time zone
Receipt time of last message received from origin WAL sender
latest_end_lsn
pg_lsn
Last write-ahead log location reported to origin WAL sender
latest_end_time
timestamp with time zone
Time of last write-ahead log location reported to origin WAL sender
slot_name
text
Replication slot name used by this WAL receiver
sender_host
text
Host of the PostgreSQL instance this WAL receiver is connected to. This can be a host name, an IP address, or a directory path if the connection is via Unix socket. (The path case can be distinguished because it will always be an absolute path, beginning with /
.)
sender_port
integer
Port number of the PostgreSQL instance this WAL receiver is connected to.
conninfo
text
Connection string used by this WAL receiver, with security-sensitive fields obfuscated.
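On a standby server, a quick health check of the WAL receiver might select just the columns of interest from this single-row view, for example:
SELECT status, received_tli, flushed_lsn, latest_end_lsn,
       last_msg_receipt_time, sender_host, sender_port
FROM pg_stat_wal_receiver;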
pg_stat_subscription
The pg_stat_subscription
view will contain one row per subscription for main worker (with null PID if the worker is not running), and additional rows for workers handling the initial data copy of the subscribed tables.
pg_stat_subscription
View
Column Type
Description
subid
oid
OID of the subscription
subname
name
Name of the subscription
pid
integer
Process ID of the subscription worker process
relid
oid
OID of the relation that the worker is synchronizing; null for the main apply worker
received_lsn
pg_lsn
Last write-ahead log location received, the initial value of this field being 0
last_msg_send_time
timestamp with time zone
Send time of last message received from origin WAL sender
last_msg_receipt_time
timestamp with time zone
Receipt time of last message received from origin WAL sender
latest_end_lsn
pg_lsn
Last write-ahead log location reported to origin WAL sender
latest_end_time
timestamp with time zone
Time of last write-ahead log location reported to origin WAL sender
pg_stat_ssl
The pg_stat_ssl
view will contain one row per backend or WAL sender process, showing statistics about SSL usage on this connection. It can be joined to pg_stat_activity
or pg_stat_replication
on the pid
column to get more details about the connection.
pg_stat_ssl
View
Column Type
Description
pid
integer
Process ID of a backend or WAL sender process
ssl
boolean
True if SSL is used on this connection
version
text
Version of SSL in use, or NULL if SSL is not in use on this connection
cipher
text
Name of SSL cipher in use, or NULL if SSL is not in use on this connection
bits
integer
Number of bits in the encryption algorithm used, or NULL if SSL is not used on this connection
compression
boolean
True if SSL compression is in use, false if not, or NULL if SSL is not in use on this connection
client_dn
text
Distinguished Name (DN) field from the client certificate used, or NULL if no client certificate was supplied or if SSL is not in use on this connection. This field is truncated if the DN field is longer than NAMEDATALEN
(64 characters in a standard build).
client_serial
numeric
Serial number of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. The combination of certificate serial number and certificate issuer uniquely identifies a certificate (unless the issuer erroneously reuses serial numbers).
issuer_dn
text
DN of the issuer of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. This field is truncated like client_dn
.
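Because the view shares its pid column with pg_stat_activity, a join along these lines (an illustrative sketch) shows which sessions are using SSL and with what protocol version and cipher:
SELECT a.pid, a.usename, a.client_addr, s.ssl, s.version, s.cipher
FROM pg_stat_activity AS a
JOIN pg_stat_ssl AS s USING (pid);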
pg_stat_gssapi
The pg_stat_gssapi
view will contain one row per backend, showing information about GSSAPI usage on this connection. It can be joined to pg_stat_activity
or pg_stat_replication
on the pid
column to get more details about the connection.
pg_stat_gssapi
View
Column Type
Description
pid
integer
Process ID of a backend
gss_authenticated
boolean
True if GSSAPI authentication was used for this connection
principal
text
Principal used to authenticate this connection, or NULL if GSSAPI was not used to authenticate this connection. This field is truncated if the principal is longer than NAMEDATALEN
(64 characters in a standard build).
encrypted
boolean
True if GSSAPI encryption is in use on this connection
pg_stat_archiver
The pg_stat_archiver
view will always have a single row, containing data about the archiver process of the cluster.
pg_stat_archiver
View
Column Type
Description
archived_count
bigint
Number of WAL files that have been successfully archived
last_archived_wal
text
Name of the last WAL file successfully archived
last_archived_time
timestamp with time zone
Time of the last successful archive operation
failed_count
bigint
Number of failed attempts for archiving WAL files
last_failed_wal
text
Name of the WAL file of the last failed archival operation
last_failed_time
timestamp with time zone
Time of the last failed archival operation
stats_reset
timestamp with time zone
Time at which these statistics were last reset
pg_stat_bgwriter
The pg_stat_bgwriter
view will always have a single row, containing global data for the cluster.
pg_stat_bgwriter
View
Column Type
Description
checkpoints_timed
bigint
Number of scheduled checkpoints that have been performed
checkpoints_req
bigint
Number of requested checkpoints that have been performed
checkpoint_write_time
double precision
Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds
checkpoint_sync_time
double precision
Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds
buffers_checkpoint
bigint
Number of buffers written during checkpoints
buffers_clean
bigint
Number of buffers written by the background writer
maxwritten_clean
bigint
Number of times the background writer stopped a cleaning scan because it had written too many buffers
buffers_backend
bigint
Number of buffers written directly by a backend
buffers_backend_fsync
bigint
Number of times a backend had to execute its own fsync
call (normally the background writer handles those even when the backend does its own write)
buffers_alloc
bigint
Number of buffers allocated
stats_reset
timestamp with time zone
Time at which these statistics were last reset
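A simple way to keep an eye on checkpoint and buffer-writing behavior is to compare the counters in this single-row view; for example (interpretation depends on the configured checkpoint settings and on the workload):
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend,
       stats_reset
FROM pg_stat_bgwriter;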
pg_stat_database
The pg_stat_database view will contain one row for each database in the cluster, plus one for shared objects, showing database-wide statistics.
pg_stat_database
View
Column Type
Description
datid
oid
OID of this database, or 0 for objects belonging to a shared relation
datname
name
Name of this database, or NULL
for shared objects.
numbackends
integer
Number of backends currently connected to this database, or NULL
for shared objects. This is the only column in this view that returns a value reflecting current state; all other columns return the accumulated values since the last reset.
xact_commit
bigint
Number of transactions in this database that have been committed
xact_rollback
bigint
Number of transactions in this database that have been rolled back
blks_read
bigint
Number of disk blocks read in this database
blks_hit
bigint
Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache)
tup_returned
bigint
Number of rows returned by queries in this database
tup_fetched
bigint
Number of rows fetched by queries in this database
tup_inserted
bigint
Number of rows inserted by queries in this database
tup_updated
bigint
Number of rows updated by queries in this database
tup_deleted
bigint
Number of rows deleted by queries in this database
conflicts
bigint
temp_files
bigint
temp_bytes
bigint
deadlocks
bigint
Number of deadlocks detected in this database
checksum_failures
bigint
Number of data page checksum failures detected in this database (or on a shared object), or NULL if data checksums are not enabled.
checksum_last_failure
timestamp with time zone
Time at which the last data page checksum failure was detected in this database (or on a shared object), or NULL if data checksums are not enabled.
blk_read_time
double precision
blk_write_time
double precision
stats_reset
timestamp with time zone
Time at which these statistics were last reset
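These per-database counters lend themselves to derived metrics. For example, a rough buffer cache hit ratio per database could be computed as follows (a sketch only; NULLIF guards against division by zero for databases with no block activity yet):
SELECT datname,
       blks_hit::float8 / NULLIF(blks_hit + blks_read, 0) AS cache_hit_ratio,
       xact_commit, xact_rollback, deadlocks
FROM pg_stat_database
WHERE datname IS NOT NULL;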
pg_stat_database_conflicts
The pg_stat_database_conflicts
view will contain one row per database, showing database-wide statistics about query cancels occurring due to conflicts with recovery on standby servers. This view will only contain information on standby servers, since conflicts do not occur on master servers.
pg_stat_database_conflicts
View
Column Type
Description
datid
oid
OID of a database
datname
name
Name of this database
confl_tablespace
bigint
Number of queries in this database that have been canceled due to dropped tablespaces
confl_lock
bigint
Number of queries in this database that have been canceled due to lock timeouts
confl_snapshot
bigint
Number of queries in this database that have been canceled due to old snapshots
confl_bufferpin
bigint
Number of queries in this database that have been canceled due to pinned buffers
confl_deadlock
bigint
Number of queries in this database that have been canceled due to deadlocks
pg_stat_all_tables
The pg_stat_all_tables view will contain one row for each table in the current database (including TOAST tables), showing statistics about accesses to that specific table. The pg_stat_user_tables and pg_stat_sys_tables views contain the same information, but filtered to only show user and system tables respectively.
pg_stat_all_tables
View
Column Type
Description
relid
oid
OID of a table
schemaname
name
Name of the schema that this table is in
relname
name
Name of this table
seq_scan
bigint
Number of sequential scans initiated on this table
seq_tup_read
bigint
Number of live rows fetched by sequential scans
idx_scan
bigint
Number of index scans initiated on this table
idx_tup_fetch
bigint
Number of live rows fetched by index scans
n_tup_ins
bigint
Number of rows inserted
n_tup_upd
bigint
Number of rows updated (includes HOT updated rows)
n_tup_del
bigint
Number of rows deleted
n_tup_hot_upd
bigint
Number of rows HOT updated (i.e., with no separate index update required)
n_live_tup
bigint
Estimated number of live rows
n_dead_tup
bigint
Estimated number of dead rows
n_mod_since_analyze
bigint
Estimated number of rows modified since this table was last analyzed
n_ins_since_vacuum
bigint
Estimated number of rows inserted since this table was last vacuumed
last_vacuum
timestamp with time zone
Last time at which this table was manually vacuumed (not counting VACUUM FULL
)
last_autovacuum
timestamp with time zone
Last time at which this table was vacuumed by the autovacuum daemon
last_analyze
timestamp with time zone
Last time at which this table was manually analyzed
last_autoanalyze
timestamp with time zone
Last time at which this table was analyzed by the autovacuum daemon
vacuum_count
bigint
Number of times this table has been manually vacuumed (not counting VACUUM FULL
)
autovacuum_count
bigint
Number of times this table has been vacuumed by the autovacuum daemon
analyze_count
bigint
Number of times this table has been manually analyzed
autoanalyze_count
bigint
Number of times this table has been analyzed by the autovacuum daemon
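These counters are commonly used to spot tables that may need vacuuming attention. One illustrative query ranks user tables by their estimated dead-row count:
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;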
pg_stat_all_indexes
The pg_stat_all_indexes
view will contain one row for each index in the current database, showing statistics about accesses to that specific index. The pg_stat_user_indexes
and pg_stat_sys_indexes
views contain the same information, but filtered to only show user and system indexes respectively.
pg_stat_all_indexes
View
Column Type
Description
relid
oid
OID of the table for this index
indexrelid
oid
OID of this index
schemaname
name
Name of the schema this index is in
relname
name
Name of the table for this index
indexrelname
name
Name of this index
idx_scan
bigint
Number of index scans initiated on this index
idx_tup_read
bigint
Number of index entries returned by scans on this index
idx_tup_fetch
bigint
Number of live table rows fetched by simple index scans using this index
Indexes can be used by simple index scans, “bitmap” index scans, and the optimizer. In a bitmap scan the output of several indexes can be combined via AND or OR rules, so it is difficult to associate individual heap row fetches with specific indexes when a bitmap scan is used. Therefore, a bitmap scan increments the pg_stat_all_indexes.idx_tup_read count(s) for the index(es) it uses, and it increments the pg_stat_all_tables.idx_tup_fetch count for the table, but it does not affect pg_stat_all_indexes.idx_tup_fetch. The optimizer also accesses indexes to check for supplied constants whose values are outside the recorded range of the optimizer statistics because the optimizer statistics might be stale.
The idx_tup_read and idx_tup_fetch counts can be different even without any use of bitmap scans, because idx_tup_read counts index entries retrieved from the index while idx_tup_fetch counts live rows fetched from the table. The latter will be less if any dead or not-yet-committed rows are fetched using the index, or if any heap fetches are avoided by means of an index-only scan.
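One common use of these counters is to look for indexes that are never used by index scans. A query along the following lines lists user indexes that have not been scanned since the statistics were last reset (note that such an index may still be needed for constraint enforcement or for the planner):
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname;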
pg_statio_all_tables
The pg_statio_all_tables
view will contain one row for each table in the current database (including TOAST tables), showing statistics about I/O on that specific table. The pg_statio_user_tables
and pg_statio_sys_tables
views contain the same information, but filtered to only show user and system tables respectively.
pg_statio_all_tables
View
Column Type
Description
relid
oid
OID of a table
schemaname
name
Name of the schema that this table is in
relname
name
Name of this table
heap_blks_read
bigint
Number of disk blocks read from this table
heap_blks_hit
bigint
Number of buffer hits in this table
idx_blks_read
bigint
Number of disk blocks read from all indexes on this table
idx_blks_hit
bigint
Number of buffer hits in all indexes on this table
toast_blks_read
bigint
Number of disk blocks read from this table's TOAST table (if any)
toast_blks_hit
bigint
Number of buffer hits in this table's TOAST table (if any)
tidx_blks_read
bigint
Number of disk blocks read from this table's TOAST table indexes (if any)
tidx_blks_hit
bigint
Number of buffer hits in this table's TOAST table indexes (if any)
pg_statio_all_indexes
The pg_statio_all_indexes
view will contain one row for each index in the current database, showing statistics about I/O on that specific index. The pg_statio_user_indexes
and pg_statio_sys_indexes
views contain the same information, but filtered to only show user and system indexes respectively.
pg_statio_all_indexes
View
Column Type
Description
relid
oid
OID of the table for this index
indexrelid
oid
OID of this index
schemaname
name
Name of the schema this index is in
relname
name
Name of the table for this index
indexrelname
name
Name of this index
idx_blks_read
bigint
Number of disk blocks read from this index
idx_blks_hit
bigint
Number of buffer hits in this index
pg_statio_all_sequences
The pg_statio_all_sequences
view will contain one row for each sequence in the current database, showing statistics about I/O on that specific sequence.
pg_statio_all_sequences
View
Column Type
Description
relid
oid
OID of a sequence
schemaname
name
Name of the schema this sequence is in
relname
name
Name of this sequence
blks_read
bigint
Number of disk blocks read from this sequence
blks_hit
bigint
Number of buffer hits in this sequence
pg_stat_user_functions
The pg_stat_user_functions
view will contain one row for each tracked function, showing statistics about executions of that function. The track_functions parameter controls exactly which functions are tracked.
pg_stat_user_functions
View
Column Type
Description
funcid
oid
OID of a function
schemaname
name
Name of the schema this function is in
funcname
name
Name of this function
calls
bigint
Number of times this function has been called
total_time
double precision
Total time spent in this function and all other functions called by it, in milliseconds
self_time
double precision
Total time spent in this function itself, not including other functions called by it, in milliseconds
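Assuming function tracking has been enabled via track_functions, a query such as the following (illustrative only) lists the most expensive tracked functions by self time:
SELECT funcname, calls, total_time, self_time
FROM pg_stat_user_functions
ORDER BY self_time DESC;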
pg_stat_slru
PostgreSQL accesses certain on-disk information via SLRU (simple least-recently-used) caches. The pg_stat_slru
view will contain one row for each tracked SLRU cache, showing statistics about access to cached pages.
pg_stat_slru
View
Column Type
Description
name
text
Name of the SLRU
blks_zeroed
bigint
Number of blocks zeroed during initializations
blks_hit
bigint
Number of times disk blocks were found already in the SLRU, so that a read was not necessary (this only includes hits in the SLRU, not the operating system's file system cache)
blks_read
bigint
Number of disk blocks read for this SLRU
blks_written
bigint
Number of disk blocks written for this SLRU
blks_exists
bigint
Number of blocks checked for existence for this SLRU
flushes
bigint
Number of flushes of dirty data for this SLRU
truncates
bigint
Number of truncates for this SLRU
stats_reset
timestamp with time zone
Time at which these statistics were last reset
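A quick look at SLRU activity can be had with a query like this (which caches matter depends on the workload):
SELECT name, blks_hit, blks_read, blks_written, flushes, truncates
FROM pg_stat_slru;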
Other ways of looking at the statistics can be set up by writing queries that use the same underlying statistics access functions used by the standard views shown above. For details such as the functions' names, consult the definitions of the standard views. (For example, in psql you could issue \d+ pg_stat_activity
.) The access functions for per-database statistics take a database OID as an argument to identify which database to report on. The per-table and per-index functions take a table or index OID. The functions for per-function statistics take a function OID. Note that only tables, indexes, and functions in the current database can be seen with these functions.
Additional functions related to statistics collection are listed in Table 27.30.
Function
Description
pg_backend_pid
() → integer
Returns the process ID of the server process attached to the current session.
pg_stat_get_activity
( integer
) → setof record
Returns a record of information about the backend with the specified process ID, or one record for each active backend in the system if NULL
is specified. The fields returned are a subset of those in the pg_stat_activity
view.
pg_stat_get_snapshot_timestamp
() → timestamp with time zone
Returns the timestamp of the current statistics snapshot.
pg_stat_clear_snapshot
() → void
Discards the current statistics snapshot.
pg_stat_reset
() → void
Resets all statistics counters for the current database to zero.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_reset_shared
( text
) → void
Resets some cluster-wide statistics counters to zero, depending on the argument. The argument can be bgwriter to reset all the counters shown in the pg_stat_bgwriter view, or archiver to reset all the counters shown in the pg_stat_archiver view.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_reset_single_table_counters
( oid
) → void
Resets statistics for a single table or index in the current database to zero.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_reset_single_function_counters
( oid
) → void
Resets statistics for a single function in the current database to zero.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_reset_slru
( text
) → void
Resets statistics to zero for a single SLRU cache, or for all SLRUs in the cluster. If the argument is NULL, all counters shown in the pg_stat_slru view for all SLRU caches are reset. The argument can be one of CommitTs, MultiXactMember, MultiXactOffset, Notify, Serial, Subtrans, or Xact to reset the counters for only that entry. If the argument is other (or indeed, any unrecognized name), then the counters for all other SLRU caches, such as extension-defined caches, are reset.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
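To illustrate, the reset functions described above might be invoked as follows (examples only; resetting counters discards history that monitoring tools may depend on):
SELECT pg_stat_reset();                  -- counters for the current database
SELECT pg_stat_reset_shared('bgwriter'); -- counters shown in pg_stat_bgwriter
SELECT pg_stat_reset_slru('Notify');     -- counters for a single SLRU cache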
pg_stat_get_activity, the underlying function of the pg_stat_activity view, returns a set of records containing all the available information about each backend process. Sometimes it may be more convenient to obtain just a subset of this information. In such cases, an older set of per-backend statistics access functions can be used; these are shown in Table 27.31. These access functions use a backend ID number, which ranges from one to the number of currently active backends. The function pg_stat_get_backend_idset provides a convenient way to generate one row for each active backend for invoking these functions. For example, to show the PIDs and current queries of all backends:
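One way to write such a query, using only the functions listed below, is:
SELECT pg_stat_get_backend_pid(s.backendid) AS pid,
       pg_stat_get_backend_activity(s.backendid) AS query
FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;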
Function
Description
pg_stat_get_backend_idset
() → setof integer
Returns the set of currently active backend ID numbers (from 1 to the number of active backends).
pg_stat_get_backend_activity
( integer
) → text
Returns the text of this backend's most recent query.
pg_stat_get_backend_activity_start
( integer
) → timestamp with time zone
Returns the time when the backend's most recent query was started.
pg_stat_get_backend_client_addr
( integer
) → inet
Returns the IP address of the client connected to this backend.
pg_stat_get_backend_client_port
( integer
) → integer
Returns the TCP port number that the client is using for communication.
pg_stat_get_backend_dbid
( integer
) → oid
Returns the OID of the database this backend is connected to.
pg_stat_get_backend_pid
( integer
) → integer
Returns the process ID of this backend.
pg_stat_get_backend_start
( integer
) → timestamp with time zone
Returns the time when this process was started.
pg_stat_get_backend_userid
( integer
) → oid
Returns the OID of the user logged into this backend.
pg_stat_get_backend_wait_event_type
( integer
) → text
pg_stat_get_backend_wait_event
( integer
) → text
pg_stat_get_backend_xact_start
( integer
) → timestamp with time zone
Returns the time when the backend's current transaction was started.
One row per server process, showing information related to the current activity of that process, such as state and current query. See for details.
One row per WAL sender process, showing statistics about replication to that sender's connected standby server. See for details.
Only one row, showing statistics about the WAL receiver from that receiver's connected server. See for details.
At least one row per subscription, showing information about the subscription workers. See for details.
One row per connection (regular and replication), showing information about SSL used on this connection. See for details.
One row per connection (regular and replication), showing information about GSSAPI authentication and encryption used on this connection. See for details.
One row for each backend (including autovacuum worker processes) running ANALYZE
, showing current progress. See .
One row for each backend running CREATE INDEX
or REINDEX
, showing current progress. See .
One row for each backend (including autovacuum worker processes) running VACUUM
, showing current progress. See .
One row for each backend running CLUSTER
or VACUUM FULL
, showing current progress. See .
One row for each WAL sender process streaming a base backup, showing current progress. See .
One row only, showing statistics about the WAL archiver process's activity. See for details.
One row only, showing statistics about the background writer process's activity. See for details.
One row per database, showing database-wide statistics. See for details.
One row per database, showing database-wide statistics about query cancels due to conflict with recovery on standby servers. See for details.
One row for each table in the current database, showing statistics about accesses to that specific table. See for details.
One row for each index in the current database, showing statistics about accesses to that specific index. See for details.
One row for each table in the current database, showing statistics about I/O on that specific table. See for details.
One row for each index in the current database, showing statistics about I/O on that specific index. See for details.
One row for each sequence in the current database, showing statistics about I/O on that specific sequence. See for details.
One row for each tracked function, showing statistics about executions of that function. See for details.
One row per SLRU, showing statistics of operations. See for details.
Host name of the connected client, as reported by a reverse DNS lookup of client_addr
. This field will only be non-null for IP connections, and only when is enabled.
The type of event for which the backend is waiting, if any; otherwise NULL. See .
Wait event name if backend is currently waiting, otherwise NULL. See through .
disabled
: This state is reported if is disabled in this backend.
Text of this backend's most recent query. If state
is active
this field shows the currently executing query. In all other states, it shows the last query that was executed. By default the query text is truncated at 1024 bytes; this value can be changed via the parameter .
The server process is idle. This event type indicates a process waiting for activity in its main processing loop. wait_event
will identify the specific wait point; see .
The server process is waiting for exclusive access to a data buffer. Buffer pin waits can be protracted if another process holds an open cursor that last read data from the buffer in question. See .
The server process is waiting for activity on a socket connected to a user application. Thus, the server expects something to happen that is independent of its internal processes. wait_event
will identify the specific wait point; see .
The server process is waiting for some condition defined by an extension module. See .
The server process is waiting for an I/O operation to complete. wait_event
will identify the specific wait point; see .
The server process is waiting for some interaction with another server process. wait_event
will identify the specific wait point; see .
The server process is waiting for a heavyweight lock. Heavyweight locks, also known as lock manager locks or simply locks, primarily protect SQL-visible objects such as tables. However, they are also used to ensure mutual exclusion for certain internal operations such as relation extension. wait_event
will identify the type of lock awaited; see .
The server process is waiting for a lightweight lock. Most such locks protect a particular data structure in shared memory. wait_event
will contain a name identifying the purpose of the lightweight lock. (Some locks have specific names; others are part of a group of locks each with a similar purpose.) See .
The server process is waiting for a timeout to expire. wait_event
will identify the specific wait point; see .
Host name of the connected client, as reported by a reverse DNS lookup of client_addr
. This field will only be non-null for IP connections, and only when is enabled.
This standby's xmin
horizon reported by .
Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see for details.)
Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the setting.
Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the setting.
Time spent reading data file blocks by backends in this database, in milliseconds (if is enabled, otherwise zero)
Time spent writing data file blocks by backends in this database, in milliseconds (if is enabled, otherwise zero)
Returns the wait event type name if this backend is currently waiting, otherwise NULL. See for details.
Returns the wait event name if this backend is currently waiting, otherwise NULL. See through .